
Structured backward errors analysis for generalized saddle point problems arising from the incompressible Navier-Stokes equations

  • Recently, a number of fast iterative methods have been proposed for solving the structured linear system arising from the incompressible Navier-Stokes equations. In order to evaluate the strong stability of these numerical algorithms, in this paper we carry out a structured backward error analysis for this type of structured linear system and present an explicit formula for the structured backward error. Based on this formula, we perform numerical experiments to compare the effectiveness of several existing numerical algorithms.

    Citation: Peng Lv. Structured backward errors analysis for generalized saddle point problems arising from the incompressible Navier-Stokes equations[J]. AIMS Mathematics, 2023, 8(12): 30501-30510. doi: 10.3934/math.20231558




    In this paper, we consider the structured backward error analysis for the structured linear system arising from the incompressible Navier-Stokes equations, which has the following form [9]:

    $$\mathcal{A}u=\begin{bmatrix} A_1 & 0 & B_1^T \\ 0 & A_2 & B_2^T \\ -B_1 & -B_2 & C \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \\ p \end{bmatrix}=\begin{bmatrix} f_1 \\ f_2 \\ -g \end{bmatrix}=b, \tag{1.1}$$

    where $A_1\in\mathbb{R}^{n_1\times n_1}$ and $A_2\in\mathbb{R}^{n_2\times n_2}$ are nonsymmetric positive definite matrices, $B_1\in\mathbb{R}^{m\times n_1}$ and $B_2\in\mathbb{R}^{m\times n_2}$ have full row rank, and $C\in\mathbb{SR}^{m\times m}$ is a symmetric positive semi-definite matrix; here $\mathbb{R}^{m\times n}$ and $\mathbb{SR}^{m\times m}$ denote the sets of $m\times n$ real matrices and $m\times m$ real symmetric matrices, respectively. These conditions guarantee the existence and uniqueness of the solution of the structured linear system (1.1).
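    To make the setting concrete, the following NumPy sketch (an illustrative construction, not taken from the paper; the sizes, names and random data are assumptions) assembles a small dense instance of (1.1) satisfying the stated block structure.

```python
import numpy as np

def make_test_system(n1=8, n2=8, m=5, seed=0):
    """Assemble a small random instance of (1.1): A1, A2 nonsymmetric positive
    definite, B1, B2 of full row rank, C symmetric positive semi-definite."""
    rng = np.random.default_rng(seed)

    def nonsym_pd(n):
        H = rng.standard_normal((n, n))
        H = H @ H.T + n * np.eye(n)        # symmetric positive definite part
        S = rng.standard_normal((n, n))
        return H + (S - S.T)               # adding a skew part keeps x^T A x > 0

    A1, A2 = nonsym_pd(n1), nonsym_pd(n2)
    B1 = rng.standard_normal((m, n1))      # full row rank with probability 1 (m <= n1)
    B2 = rng.standard_normal((m, n2))
    E = rng.standard_normal((m, m))
    C = E @ E.T                            # symmetric positive semi-definite
    f1, f2, g = (rng.standard_normal(k) for k in (n1, n2, m))

    # Coefficient matrix and right-hand side of (1.1).
    A = np.block([
        [A1, np.zeros((n1, n2)), B1.T],
        [np.zeros((n2, n1)), A2, B2.T],
        [-B1, -B2, C],
    ])
    b = np.concatenate([f1, f2, -g])
    return A1, A2, B1, B2, C, f1, f2, g, A, b

# Build one instance and solve it directly.
A1, A2, B1, B2, C, f1, f2, g, A, b = make_test_system()
u = np.linalg.solve(A, b)
```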

    Recently, a great variety of fast preconditioned Krylov subspace methods have been developed for solving the structured linear system (1.1) by exploiting the specific block structure of the coefficient matrix $\mathcal{A}$, such as the dimensional splitting (DS) [1], relaxed dimensional factorization (RDF) [2], relaxed splitting (RS) [22], modified dimensional split (MDS) [5], generalized relaxed splitting (GRS) [4], modified relaxed splitting (MRS) [10], relaxed block upper-lower triangular (RBULT) [16], relaxed upper and lower triangular splitting (RULT) [7] and inexact modified relaxed splitting (IMRS) [15] preconditioned GMRES methods. In order to verify the validity and the strong stability of these numerical algorithms, one can perform a structured backward error analysis for the structured linear system (1.1). Given an approximate solution to a structured problem, structured backward error analysis consists in finding a structure-preserving perturbation of the data of minimal size such that the approximate solution is an exact solution of the perturbed, structure-preserving problem. The size of this smallest structure-preserving perturbation is called the structured backward error. In matrix computations, structured backward error analysis is useful not only for examining the structured stability (or strong stability [3]) of a numerical algorithm, but also for designing effective stopping criteria for the iterative solution of large sparse structured systems.

    There has been substantial interest in structured backward error analysis in recent years. To the best of our knowledge, structured backward error analyses for standard or generalized saddle point systems have been carried out by several authors [6,17,18,20,23,24]. Although the block structured linear system (1.1) can be viewed as a generalized $2\times 2$ block saddle point problem, the aforementioned analyses do not directly cover the system (1.1) because of its special block structure. Naturally, a new and detailed analysis of the structured backward error of the linear system (1.1) needs to be performed; this paper focuses on this topic.

    This paper is organized as follows. In Section 2, we define the structured backward error of the structured linear system (1.1) and derive its exact and computable formula. Based on this formula, in Section 3 we perform numerical experiments to compare the validity of some existing numerical algorithms. Finally, in Section 4, we present some conclusions.

    Similar to the structured backward error analysis for standard or generalized saddle point problems (see [6,17,18,20,23,24]), we define the structured backward error for the linear system (1.1).

    Let $\tilde{u}=(\tilde{u}_1^T,\tilde{u}_2^T,\tilde{p}^T)^T$ be a computed solution of the system (1.1). Its parameterized structured backward error $\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})$ is defined as

    $$\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})=\min_{(\Delta A_1,\Delta A_2,\Delta B_1,\Delta B_2,\Delta C,\Delta f_1,\Delta f_2,\Delta g)\in\mathcal{F}}\left\|\begin{bmatrix}\Delta A_1 & 0 & \theta_3\Delta B_1^T & \lambda_1\Delta f_1\\ 0 & \theta_1\Delta A_2 & \theta_4\Delta B_2^T & \lambda_2\Delta f_2\\ \theta_3\Delta B_1 & \theta_4\Delta B_2 & \theta_2\Delta C & \lambda_3\Delta g\end{bmatrix}\right\|_F, \tag{2.1}$$

    where the set $\mathcal{F}$ is defined by

    $$\mathcal{F}=\left\{(\Delta A_1,\Delta A_2,\Delta B_1,\Delta B_2,\Delta C,\Delta f_1,\Delta f_2,\Delta g):\ \begin{bmatrix}A_1+\Delta A_1 & 0 & (B_1+\Delta B_1)^T\\ 0 & A_2+\Delta A_2 & (B_2+\Delta B_2)^T\\ -(B_1+\Delta B_1) & -(B_2+\Delta B_2) & C+\Delta C\end{bmatrix}\begin{bmatrix}\tilde{u}_1\\ \tilde{u}_2\\ \tilde{p}\end{bmatrix}=\begin{bmatrix}f_1+\Delta f_1\\ f_2+\Delta f_2\\ -g-\Delta g\end{bmatrix},\ \Delta C=\Delta C^T\right\} \tag{2.2}$$

    and $\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3$ are positive parameters that can be adjusted to weight some of the perturbations more heavily than others. A particular choice is

    $$\tilde{\theta}_1=\frac{\|A_1\|_F}{\|A_2\|_F},\quad \tilde{\theta}_2=\frac{\|A_1\|_F}{\|C\|_F},\quad \tilde{\theta}_3=\frac{\|A_1\|_F}{\|B_1\|_F},\quad \tilde{\theta}_4=\frac{\|A_1\|_F}{\|B_2\|_F},\quad \tilde{\lambda}_1=\frac{\|A_1\|_F}{\|f_1\|_2},\quad \tilde{\lambda}_2=\frac{\|A_1\|_F}{\|f_2\|_2},\quad \tilde{\lambda}_3=\frac{\|A_1\|_F}{\|g\|_2}, \tag{2.3}$$

    which yields the relative structured backward error

    $$\eta_S(\tilde{u})=\eta_{(\tilde{\theta}_1,\tilde{\theta}_2,\tilde{\theta}_3,\tilde{\theta}_4,\tilde{\lambda}_1,\tilde{\lambda}_2,\tilde{\lambda}_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})/\|A_1\|_F, \tag{2.4}$$

    where $\|\cdot\|_F$ and $\|\cdot\|_2$ denote the Frobenius norm and the Euclidean norm, respectively.

    It can be seen from the above definition that a small $\eta_S(\tilde{u})$ means that the computed solution $\tilde{u}=(\tilde{u}_1^T,\tilde{u}_2^T,\tilde{p}^T)^T$ is the exact solution of a slightly perturbed structure-preserving linear system. We call $\tilde{u}$ a structured backward stable solution to (1.1), and the corresponding numerical algorithm structured backward stable, if $\eta_S(\tilde{u})$ is a small multiple of the machine precision. Consequently, an exact and computable formula for the structured backward error $\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})$ is useful for testing the strong stability of a practical algorithm.

    In the following, we give an explicit expression for the parameterized structured backward error $\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})$.

    Theorem 2.1. Let $\tilde{u}=(\tilde{u}_1^T,\tilde{u}_2^T,\tilde{p}^T)^T$ with $\tilde{p}\neq 0$ be a computed solution to the structured linear system (1.1). Then

    $$\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})=\|P_K^{\perp}d\|_2, \tag{2.5}$$

    where $P_K^{\perp}=I_s-KK^{\dagger}$ is the orthogonal projector onto the orthogonal complement of the column space of $K$, $K^{\dagger}$ and $\tilde{p}^{\dagger}=\tilde{p}^T/\|\tilde{p}\|_2^2$ denote Moore-Penrose inverses,

    $$K=\begin{bmatrix}\theta_3 I_{mn_1} & 0 & 0\\ 0 & \theta_4 I_{mn_2} & 0\\ 0 & 0 & \lambda_3 I_m\\ -\dfrac{I_{n_1}\otimes\tilde{p}^T}{\|\hat{u}_1\|_2} & 0 & 0\\ 0 & -\dfrac{I_{n_2}\otimes\tilde{p}^T}{\|\hat{u}_2\|_2} & 0\\ \dfrac{\theta_2(\tilde{u}_1^T\otimes I_m)}{\|\tilde{p}\|_2} & \dfrac{\theta_2(\tilde{u}_2^T\otimes I_m)}{\|\tilde{p}\|_2} & -\dfrac{\theta_2 I_m}{\|\tilde{p}\|_2}\\ \dfrac{\theta_2\big(\tilde{u}_1^T\otimes(I_m-\tilde{p}\tilde{p}^{\dagger})\big)}{\|\tilde{p}\|_2} & \dfrac{\theta_2\big(\tilde{u}_2^T\otimes(I_m-\tilde{p}\tilde{p}^{\dagger})\big)}{\|\tilde{p}\|_2} & -\dfrac{\theta_2(I_m-\tilde{p}\tilde{p}^{\dagger})}{\|\tilde{p}\|_2}\end{bmatrix}\in\mathbb{R}^{s\times t},\qquad d=\begin{bmatrix}0\\ 0\\ 0\\ \dfrac{r_{f_1}}{\|\hat{u}_1\|_2}\\ \dfrac{r_{f_2}}{\|\hat{u}_2\|_2}\\ \dfrac{\theta_2 r_g}{\|\tilde{p}\|_2}\\ \dfrac{\theta_2(I_m-\tilde{p}\tilde{p}^{\dagger})r_g}{\|\tilde{p}\|_2}\end{bmatrix}\in\mathbb{R}^{s}, \tag{2.6}$$

    $$r_{f_1}=f_1-A_1\tilde{u}_1-B_1^T\tilde{p},\qquad r_{f_2}=f_2-A_2\tilde{u}_2-B_2^T\tilde{p},\qquad r_g=-g+B_1\tilde{u}_1+B_2\tilde{u}_2-C\tilde{p}, \tag{2.7}$$

    $$\hat{u}_1=(\tilde{u}_1^T,\,-1/\lambda_1)^T,\qquad \hat{u}_2=(\tilde{u}_2^T/\theta_1,\,-1/\lambda_2)^T, \tag{2.8}$$

    and

    $$s=mn_1+mn_2+3m+n_1+n_2,\qquad t=mn_1+mn_2+m.$$
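    Before turning to the proof, the following NumPy sketch (illustrative only, not the author's code; the function name and default weights are assumptions) evaluates the formula of Theorem 2.1: it forms the residuals (2.7) and scaled vectors (2.8), assembles $K$ and $d$ of (2.6) with Kronecker products, and computes $\|(I_s-KK^{\dagger})d\|_2$ through a least-squares solve. With the default weights it returns the relative structured backward error $\eta_S(\tilde{u})$ of (2.4).

```python
import numpy as np

def structured_backward_error(A1, A2, B1, B2, C, f1, f2, g, u1, u2, p,
                              theta=None, lam=None, relative=True):
    """Evaluate eta_(theta,lambda)(u1, u2, p) via the formula of Theorem 2.1.

    With theta = lam = None the relative weights (2.3) are used and the
    value eta_S of (2.4) is returned.  Illustrative sketch only.
    """
    m, n1, n2 = B1.shape[0], A1.shape[0], A2.shape[0]
    nA1 = np.linalg.norm(A1, 'fro')
    if theta is None:
        theta = (nA1 / np.linalg.norm(A2, 'fro'), nA1 / np.linalg.norm(C, 'fro'),
                 nA1 / np.linalg.norm(B1, 'fro'), nA1 / np.linalg.norm(B2, 'fro'))
    if lam is None:
        lam = (nA1 / np.linalg.norm(f1), nA1 / np.linalg.norm(f2),
               nA1 / np.linalg.norm(g))
    t1, t2, t3, t4 = theta
    l1, l2, l3 = lam

    # Residuals (2.7) and the scaled vectors of (2.8).
    rf1 = f1 - A1 @ u1 - B1.T @ p
    rf2 = f2 - A2 @ u2 - B2.T @ p
    rg = -g + B1 @ u1 + B2 @ u2 - C @ p
    nu1 = np.linalg.norm(np.append(u1, -1.0 / l1))          # ||u_hat_1||_2
    nu2 = np.linalg.norm(np.append(u2 / t1, -1.0 / l2))     # ||u_hat_2||_2
    npn = np.linalg.norm(p)
    Im, In1, In2 = np.eye(m), np.eye(n1), np.eye(n2)
    P = Im - np.outer(p, p) / npn**2                         # I_m - p p^dagger
    Z = lambda r, c: np.zeros((r, c))

    # The matrix K and vector d of (2.6); the columns of K correspond to
    # (vec(Delta B1), vec(Delta B2), Delta g).
    K = np.block([
        [t3 * np.eye(m * n1), Z(m * n1, m * n2), Z(m * n1, m)],
        [Z(m * n2, m * n1), t4 * np.eye(m * n2), Z(m * n2, m)],
        [Z(m, m * n1), Z(m, m * n2), l3 * Im],
        [-np.kron(In1, p.reshape(1, -1)) / nu1, Z(n1, m * n2), Z(n1, m)],
        [Z(n2, m * n1), -np.kron(In2, p.reshape(1, -1)) / nu2, Z(n2, m)],
        [t2 * np.kron(u1.reshape(1, -1), Im) / npn,
         t2 * np.kron(u2.reshape(1, -1), Im) / npn, -t2 * Im / npn],
        [t2 * np.kron(u1.reshape(1, -1), P) / npn,
         t2 * np.kron(u2.reshape(1, -1), P) / npn, -t2 * P / npn],
    ])
    d = np.concatenate([np.zeros(m * n1 + m * n2 + m),
                        rf1 / nu1, rf2 / nu2,
                        t2 * rg / npn, t2 * (P @ rg) / npn])

    # eta = ||(I_s - K K^+) d||_2 = min_x ||K x + d||_2, via least squares.
    x, *_ = np.linalg.lstsq(K, -d, rcond=None)
    eta = np.linalg.norm(K @ x + d)
    return eta / nA1 if relative else eta
```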

    Proof. It is seen from (2.2) that $(\Delta A_1,\Delta A_2,\Delta B_1,\Delta B_2,\Delta C,\Delta f_1,\Delta f_2,\Delta g)\in\mathcal{F}$ if and only if $\Delta A_1$, $\Delta A_2$, $\Delta B_1$, $\Delta B_2$, $\Delta C$, $\Delta f_1$, $\Delta f_2$, $\Delta g$ satisfy

    $$\Delta A_1\tilde{u}_1-\Delta f_1=r_{f_1}-\Delta B_1^T\tilde{p},\qquad \Delta A_2\tilde{u}_2-\Delta f_2=r_{f_2}-\Delta B_2^T\tilde{p},\qquad \Delta C\tilde{p}=w\quad\text{and}\quad\Delta C=\Delta C^T, \tag{2.9}$$

    i.e.,

    $$(\Delta A_1,\ \lambda_1\Delta f_1)\begin{pmatrix}\tilde{u}_1\\ -\tfrac{1}{\lambda_1}\end{pmatrix}=r_{f_1}-\Delta B_1^T\tilde{p},\qquad (\theta_1\Delta A_2,\ \lambda_2\Delta f_2)\begin{pmatrix}\tfrac{1}{\theta_1}\tilde{u}_2\\ -\tfrac{1}{\lambda_2}\end{pmatrix}=r_{f_2}-\Delta B_2^T\tilde{p},$$

    and

    $$(\theta_2\Delta C)\left(\tfrac{1}{\theta_2}\tilde{p}\right)=w,\qquad \Delta C=\Delta C^T,$$

    where

    $$w:=r_g-\Delta g+\Delta B_1\tilde{u}_1+\Delta B_2\tilde{u}_2.$$

    From the above equations and (2.8), and using the well-known results of Lemmas 2.1 and 2.2 in [20] (see also [21]), we have

    $$(\Delta A_1,\ \lambda_1\Delta f_1)=(r_{f_1}-\Delta B_1^T\tilde{p})\hat{u}_1^{\dagger}+Z_1(I_{n_1+1}-\hat{u}_1\hat{u}_1^{\dagger}),$$
    $$(\theta_1\Delta A_2,\ \lambda_2\Delta f_2)=(r_{f_2}-\Delta B_2^T\tilde{p})\hat{u}_2^{\dagger}+Z_2(I_{n_2+1}-\hat{u}_2\hat{u}_2^{\dagger}),$$

    and

    $$\theta_2\Delta C=\theta_2 w\tilde{p}^{\dagger}+\theta_2(w\tilde{p}^{\dagger})^T(I_m-\tilde{p}\tilde{p}^{\dagger})+(I_m-\tilde{p}\tilde{p}^{\dagger})T(I_m-\tilde{p}\tilde{p}^{\dagger}),$$

    where $Z_1\in\mathbb{R}^{n_1\times(n_1+1)}$, $Z_2\in\mathbb{R}^{n_2\times(n_2+1)}$ and $T\in\mathbb{SR}^{m\times m}$ are arbitrary, and $\hat{u}_1^{\dagger}$, $\hat{u}_2^{\dagger}$ denote the Moore-Penrose inverses of $\hat{u}_1$, $\hat{u}_2$. Due to the fact that

    $$\hat{u}_1^{\dagger}(I_{n_1+1}-\hat{u}_1\hat{u}_1^{\dagger})=0,\qquad \hat{u}_2^{\dagger}(I_{n_2+1}-\hat{u}_2\hat{u}_2^{\dagger})=0$$

    and $\tilde{p}^{\dagger}(I_m-\tilde{p}\tilde{p}^{\dagger})=0$, we have

    $$\|\Delta A_1\|_F^2+\lambda_1^2\|\Delta f_1\|_2^2=\frac{\|r_{f_1}-\Delta B_1^T\tilde{p}\|_2^2}{\|\hat{u}_1\|_2^2}+\|Z_1(I_{n_1+1}-\hat{u}_1\hat{u}_1^{\dagger})\|_F^2, \tag{2.10}$$
    $$\theta_1^2\|\Delta A_2\|_F^2+\lambda_2^2\|\Delta f_2\|_2^2=\frac{\|r_{f_2}-\Delta B_2^T\tilde{p}\|_2^2}{\|\hat{u}_2\|_2^2}+\|Z_2(I_{n_2+1}-\hat{u}_2\hat{u}_2^{\dagger})\|_F^2, \tag{2.11}$$

    and

    $$\theta_2^2\|\Delta C\|_F^2=\frac{\theta_2^2\|w\|_2^2}{\|\tilde{p}\|_2^2}+\frac{\theta_2^2\|(I_m-\tilde{p}\tilde{p}^{\dagger})w\|_2^2}{\|\tilde{p}\|_2^2}+\|(I_m-\tilde{p}\tilde{p}^{\dagger})T(I_m-\tilde{p}\tilde{p}^{\dagger})\|_F^2. \tag{2.12}$$

    It follows from the definition (2.1) of $\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})$ and (2.10)-(2.12) that

    $$\begin{aligned}\big(\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})\big)^2&=\min_{\substack{Z_1\in\mathbb{R}^{n_1\times(n_1+1)},\,Z_2\in\mathbb{R}^{n_2\times(n_2+1)},\,T\in\mathbb{SR}^{m\times m},\\ \Delta B_1\in\mathbb{R}^{m\times n_1},\,\Delta B_2\in\mathbb{R}^{m\times n_2},\,\Delta g\in\mathbb{R}^{m}}}\Big\{\|\Delta A_1\|_F^2+\theta_1^2\|\Delta A_2\|_F^2+\theta_2^2\|\Delta C\|_F^2+\theta_3^2\|\Delta B_1\|_F^2+\theta_4^2\|\Delta B_2\|_F^2\\ &\qquad\qquad+\lambda_1^2\|\Delta f_1\|_2^2+\lambda_2^2\|\Delta f_2\|_2^2+\lambda_3^2\|\Delta g\|_2^2\Big\}\\ &=\min_{\Delta B_1\in\mathbb{R}^{m\times n_1},\,\Delta B_2\in\mathbb{R}^{m\times n_2},\,\Delta g\in\mathbb{R}^{m}}p(\Delta B_1,\Delta B_2,\Delta g),\end{aligned}$$

    where

    $$\begin{aligned}p(\Delta B_1,\Delta B_2,\Delta g)&=\frac{\|r_{f_1}-\Delta B_1^T\tilde{p}\|_2^2}{\|\hat{u}_1\|_2^2}+\frac{\|r_{f_2}-\Delta B_2^T\tilde{p}\|_2^2}{\|\hat{u}_2\|_2^2}+\frac{\theta_2^2\|w\|_2^2}{\|\tilde{p}\|_2^2}+\frac{\theta_2^2\|(I_m-\tilde{p}\tilde{p}^{\dagger})w\|_2^2}{\|\tilde{p}\|_2^2}\\ &\quad+\theta_3^2\|\Delta B_1\|_F^2+\theta_4^2\|\Delta B_2\|_F^2+\lambda_3^2\|\Delta g\|_2^2.\end{aligned}$$

    Using the Kronecker product [11,12], we have

    $$\mathrm{vec}(r_{f_1}-\Delta B_1^T\tilde{p})=r_{f_1}-(I_{n_1}\otimes\tilde{p}^T)\mathrm{vec}(\Delta B_1),\qquad \mathrm{vec}(r_{f_2}-\Delta B_2^T\tilde{p})=r_{f_2}-(I_{n_2}\otimes\tilde{p}^T)\mathrm{vec}(\Delta B_2),$$
    $$\mathrm{vec}(w)=r_g-\Delta g+(\tilde{u}_1^T\otimes I_m)\mathrm{vec}(\Delta B_1)+(\tilde{u}_2^T\otimes I_m)\mathrm{vec}(\Delta B_2),$$

    and

    $$\mathrm{vec}\big((I_m-\tilde{p}\tilde{p}^{\dagger})w\big)=(I_m-\tilde{p}\tilde{p}^{\dagger})r_g-(I_m-\tilde{p}\tilde{p}^{\dagger})\Delta g+\big(\tilde{u}_1^T\otimes(I_m-\tilde{p}\tilde{p}^{\dagger})\big)\mathrm{vec}(\Delta B_1)+\big(\tilde{u}_2^T\otimes(I_m-\tilde{p}\tilde{p}^{\dagger})\big)\mathrm{vec}(\Delta B_2).$$

    Then

    $$\big(\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})\big)^2=\min_{\Delta B_1\in\mathbb{R}^{m\times n_1},\,\Delta B_2\in\mathbb{R}^{m\times n_2},\,\Delta g\in\mathbb{R}^{m}}\left\|K\begin{pmatrix}\mathrm{vec}(\Delta B_1)\\ \mathrm{vec}(\Delta B_2)\\ \Delta g\end{pmatrix}+d\right\|_2^2=\|(I_s-KK^{\dagger})d\|_2^2,$$

    in which $K$ and $d$ are defined as in (2.6), and $K^{\dagger}$ stands for the Moore-Penrose inverse [11] of $K$.
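    The structure-preserving step of the proof rests on the classical characterization of the symmetric solutions of $\Delta C\tilde{p}=w$ of minimal Frobenius norm (Lemmas 2.1 and 2.2 of [20]). The following short NumPy check (an illustrative sketch, not part of the paper) verifies numerically that the $T=0$ member of the family above solves $\Delta C\tilde{p}=w$, is symmetric, has the Frobenius norm predicted by (2.12), and is never larger than any other symmetric solution.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
p = rng.standard_normal(m)                 # plays the role of p~ (nonzero)
w = rng.standard_normal(m)                 # right-hand side of Delta_C p = w
pdag = p / (p @ p)                         # p^dagger = p^T / ||p||_2^2 (as 1-D array)
P = np.eye(m) - np.outer(p, pdag)          # I_m - p p^dagger

# Minimal Frobenius-norm symmetric solution (the T = 0 member of the family).
dC = np.outer(w, pdag) + np.outer(pdag, w) @ P

assert np.allclose(dC @ p, w)              # solves Delta_C p = w
assert np.allclose(dC, dC.T)               # symmetric
# Its Frobenius norm matches the first two terms of (2.12) (with theta_2 = 1).
lhs = np.linalg.norm(dC, 'fro')**2
rhs = (w @ w) / (p @ p) + (P @ w) @ (P @ w) / (p @ p)
assert np.isclose(lhs, rhs)

# Every other symmetric solution dC + P T P (T symmetric) is at least as large.
T = rng.standard_normal((m, m)); T = T + T.T
other = dC + P @ T @ P
assert np.allclose(other @ p, w) and np.allclose(other, other.T)
assert np.linalg.norm(other, 'fro') >= np.linalg.norm(dC, 'fro')
```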

    Reconsidering the structured backward error $\eta_{(\theta_1,\theta_2,\theta_3,\theta_4,\lambda_1,\lambda_2,\lambda_3)}(\tilde{u}_1,\tilde{u}_2,\tilde{p})$, if we relax the structure-preserving constraint in (2.2) to an unstructured one, we obtain the relative unstructured backward error $\eta_{\mathcal{A},b}(\tilde{u})$, which is defined by [13,19]

    $$\eta_{\mathcal{A},b}(\tilde{u})=\min_{\Delta\mathcal{A},\Delta b}\left\{\left\|\left(\frac{\|\Delta\mathcal{A}\|_F}{\|\mathcal{A}\|_F},\ \frac{\|\Delta b\|_2}{\|b\|_2}\right)\right\|_F:\ (\mathcal{A}+\Delta\mathcal{A})\tilde{u}=b+\Delta b\right\}=\frac{\|b-\mathcal{A}\tilde{u}\|_2}{\sqrt{\|\mathcal{A}\|_F^2\|\tilde{u}\|_2^2+\|b\|_2^2}}.$$

    In addition, if only the right-hand side is perturbed, we obtain

    $$\eta_{b}(\tilde{u})=\min_{\Delta b}\left\{\frac{\|\Delta b\|_2}{\|b\|_2}:\ \mathcal{A}\tilde{u}=b+\Delta b\right\}=\frac{\|b-\mathcal{A}\tilde{u}\|_2}{\|b\|_2},$$

    which is often used as a stopping criterion for iterative methods.
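    For completeness, here is a minimal NumPy sketch (illustrative; the function name is an assumption) of these two unstructured quantities, where $\mathcal{A}$ is the full coefficient matrix of (1.1) and $\tilde{u}$ the stacked computed solution.

```python
import numpy as np

def unstructured_backward_errors(A, b, u):
    """Rigal-Gaches normwise backward errors of a computed solution u of A u = b."""
    r = b - A @ u
    eta_Ab = np.linalg.norm(r) / np.sqrt(
        np.linalg.norm(A, 'fro')**2 * np.linalg.norm(u)**2 + np.linalg.norm(b)**2)
    eta_b = np.linalg.norm(r) / np.linalg.norm(b)   # only the right-hand side perturbed
    return eta_Ab, eta_b
```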

    We note that a small $\eta_{\mathcal{A},b}(\tilde{u})$ means that the computed solution $\tilde{u}=(\tilde{u}_1^T,\tilde{u}_2^T,\tilde{p}^T)^T$ is the exact solution of a slightly perturbed linear system. We call $\tilde{u}$ a backward stable solution to (1.1), and the corresponding numerical algorithm backward stable, if $\eta_{\mathcal{A},b}(\tilde{u})$ is a small multiple of the machine precision. It is worth noting that a backward stable solution may not be the exact solution of any slightly perturbed structure-preserving linear system (1.1). In other words, the relative structured backward error $\eta_S(\tilde{u})$ may be much larger than the relative unstructured backward error $\eta_{\mathcal{A},b}(\tilde{u})$. Next, we give an example to illustrate this.

    Example 2.1. Consider the structured linear system (1.1) with

    A1=M(1:3,1:3),A2=M(4:6,4:6),B1=B2=[00101010400],C=1014×[121260100]

    and

    f1=(108,10,0)T,f2=(108,1,0)T,g=(108,0,0)T,

    where M=D1PD2,D1=diag(1,5,10,50,100,104),D2=diag(1,5,100,1,5,10) and P is the Pascal matrix of order 6. Using Gaussian elimination with partial pivoting, we obtain a computed solution ˜u=(˜uT1,˜uT2,˜pT)T with

    ˜u1=(4.5172×1019.8272×1021.4527×102),˜u2=(4.5172×1019.8272×1021.4527×102),˜p=(7.6937×1012.9135×1011.0000×104).

    Then, in view of (2.4), we have the relative structured backward error

    $$\eta_S(\tilde{u})=7.7318\times 10^{-6},$$

    and the relative unstructured backward errors

    $$\eta_{\mathcal{A},b}(\tilde{u})=5.8850\times 10^{-20},\qquad \eta_{b}(\tilde{u})=1.0812\times 10^{-16}.$$

    It is seen from the above results that Gaussian elimination with partial pivoting is backward stable but not structured backward stable for this problem, and that the relative structured backward error $\eta_S(\tilde{u})$ can indeed be much larger than the unstructured backward error $\eta_{\mathcal{A},b}(\tilde{u})$. This implies that the structured backward error provides a more reliable measure for assessing the accuracy of a computed solution of the structured linear system (1.1).
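    A possible workflow for such a comparison, reusing the sketches given above (this assumes make_test_system, structured_backward_error and unstructured_backward_errors are defined in the same session; the specific ill-scaled data of Example 2.1 is not reproduced here):

```python
import numpy as np

# Assumes make_test_system, structured_backward_error and
# unstructured_backward_errors from the sketches above are in scope.
A1, A2, B1, B2, C, f1, f2, g, A, b = make_test_system(n1=8, n2=8, m=5, seed=3)
u = np.linalg.solve(A, b)                  # "computed" solution (LU with partial pivoting)
n1, n2, m = A1.shape[0], A2.shape[0], B1.shape[0]
u1, u2, p = u[:n1], u[n1:n1 + n2], u[n1 + n2:]

eta_S = structured_backward_error(A1, A2, B1, B2, C, f1, f2, g, u1, u2, p)
eta_Ab, eta_b = unstructured_backward_errors(A, b, u)
print(f"eta_S = {eta_S:.3e},  eta_Ab = {eta_Ab:.3e},  eta_b = {eta_b:.3e}")
# On well-scaled random data all three quantities are tiny; the gap
# eta_S >> eta_Ab emerges for badly scaled data such as Example 2.1.
```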

    In this section, we present some test examples to examine the stability and effectiveness of several existing preconditioners for the generalized saddle point problem (1.1) by means of the structured backward error. These problems arise from the discretization of the 2D linearized steady-state Navier-Stokes equations, i.e., the steady Oseen equations of the form

    $$\begin{cases}-\nu\Delta\mathbf{u}+(\boldsymbol{\omega}\cdot\nabla)\mathbf{u}+\nabla p=\mathbf{f},\\ \nabla\cdot\mathbf{u}=0,\end{cases}\quad\text{in }\Omega, \tag{3.1}$$

    where $\Omega$ is a bounded domain, $\nu>0$ is the viscosity, and $\boldsymbol{\omega}$ is a given divergence-free vector field (the wind). The vector field $\mathbf{u}$ stands for the velocity and $p$ represents the pressure. We use the IFISS software package developed by Elman et al. [8] to generate discretizations of the "regularized" two-dimensional lid-driven cavity problem for the Oseen equations (3.1). The mixed finite element used here is the Q1-P0 pair (bilinear velocity, constant pressure) with local stabilization. In addition, we use uniform grids of increasing size and the viscosity values listed below; all other settings follow the IFISS defaults.

    We apply the GMRES method in conjunction with the preconditioners RBULT [16], MRS [10], GRS [4] and MDS [5] to solve the generalized saddle point problem (1.1). All runs are started from the zero initial vector and terminated once the current iterate satisfies $\mathrm{RES}=\|b-\mathcal{A}\tilde{u}\|_2/\|b\|_2<10^{-14}$. In actual computations, the linear subsystems arising in the application of the preconditioners are solved by the Cholesky or LU factorization in combination with AMD or column AMD reordering. The parameters in the preconditioned GMRES methods are chosen by the algebraic estimation technique of [14]. The symbols "IT" and "CPU" stand for the iteration counts and the total CPU time (in seconds), respectively. All experiments were run on a PC with a 3.30 GHz central processing unit (Intel(R) Core(TM) i7-11370H), 16 GB memory and the Windows 10 operating system, using MATLAB 2014a with machine precision $2.2204\times 10^{-16}$.

    For different grids and viscosities, the iteration counts and elapsed CPU times of GMRES with the four preconditioners, together with the residual RES, the unstructured backward error $\eta_{\mathcal{A},b}(\tilde{u})$ and the structured backward error $\eta_S(\tilde{u})$ of the final iterate $\tilde{u}=(\tilde{u}_1^T,\tilde{u}_2^T,\tilde{p}^T)^T$, are listed in Tables 1-4. It is seen from Tables 1-4 that the structured backward errors are about one order of magnitude larger than the unstructured ones for each test problem, and that both are of the order of the unit roundoff, which indicates that the preconditioned GMRES methods are backward stable and strongly stable for solving these test problems. In addition, the iteration numbers, the elapsed CPU times and the structured backward errors show that the MRS preconditioned GMRES method is more accurate (strongly stable) and more effective than the other preconditioned GMRES methods.

    Table 1. Numerical results of the preconditioned GMRES methods for the Oseen problem with ν = 0.5.

    Grid  Quantity    RBULT       MRS         GRS         MDS
    4×4   IT          13          15          16          21
          CPU         0.0526      0.0097      0.0058      0.0111
          RES         2.7696e-15  4.3654e-16  3.8489e-15  2.9127e-15
          η_A,b(ũ)    1.3540e-16  4.8642e-17  2.8253e-16  2.4661e-16
          η_S(ũ)      4.5049e-16  2.7545e-16  1.3018e-15  8.4573e-16
    8×8   IT          31          22          32          29
          CPU         0.0727      0.0240      0.0362      0.0384
          RES         8.1142e-15  2.6728e-15  7.0315e-15  8.3449e-15
          η_A,b(ũ)    1.2244e-16  4.6301e-17  1.0205e-16  9.5598e-17
          η_S(ũ)      2.6232e-15  3.9443e-16  2.1595e-15  9.2548e-16

    Table 2. Numerical results of the preconditioned GMRES methods for the Oseen problem with ν = 0.1.

    Grid  Quantity    RBULT       MRS         GRS         MDS
    4×4   IT          13          15          16          21
          CPU         0.0402      0.0099      0.0072      0.0123
          RES         1.3354e-15  6.3830e-16  4.3890e-15  2.9338e-15
          η_A,b(ũ)    1.0812e-16  5.6560e-17  3.5954e-16  2.5398e-16
          η_S(ũ)      3.4183e-16  2.9241e-16  1.9758e-15  8.9378e-16
    8×8   IT          31          22          32          29
          CPU         0.0760      0.0264      0.0357      0.0373
          RES         9.6628e-15  2.4086e-15  9.2769e-15  7.6370e-15
          η_A,b(ũ)    1.5059e-16  4.1375e-17  1.3495e-16  8.9264e-17
          η_S(ũ)      3.1619e-15  3.6270e-16  2.8529e-15  8.6730e-16

    Table 3. Numerical results of the preconditioned GMRES methods for the Oseen problem with ν = 0.05.

    Grid  Quantity    RBULT       MRS         GRS         MDS
    4×4   IT          13          15          16          21
          CPU         0.0341      0.0080      0.0060      0.0099
          RES         3.1126e-15  6.6461e-16  3.7661e-15  2.9099e-15
          η_A,b(ũ)    1.5922e-16  4.9428e-17  3.2641e-16  2.5155e-16
          η_S(ũ)      4.8855e-16  2.5989e-16  1.8577e-15  8.8886e-16
    8×8   IT          31          22          32          29
          CPU         0.0718      0.0268      0.0367      0.0345
          RES         9.7311e-15  2.2431e-15  9.5202e-15  7.3532e-15
          η_A,b(ũ)    1.5364e-16  4.5875e-17  1.3954e-16  8.6810e-17
          η_S(ũ)      3.2157e-15  3.7135e-16  2.9603e-15  8.4309e-16

    Table 4. Numerical results of the preconditioned GMRES methods for the Oseen problem with ν = 0.01.

    Grid  Quantity    RBULT       MRS         GRS         MDS
    4×4   IT          13          15          16          21
          CPU         0.0337      0.0109      0.0067      0.0119
          RES         5.0086e-15  6.8143e-16  4.5317e-15  2.8925e-15
          η_A,b(ũ)    2.6048e-16  6.5806e-17  3.6459e-16  2.5462e-16
          η_S(ũ)      7.4883e-16  3.1987e-16  2.0524e-15  9.1407e-16
    8×8   IT          31          22          32          29
          CPU         0.0737      0.0237      0.0315      0.0344
          RES         9.7263e-15  2.0932e-15  9.6089e-15  7.1474e-15
          η_A,b(ũ)    1.5504e-16  3.8421e-17  1.4355e-16  9.0864e-17
          η_S(ũ)      3.2365e-15  3.1490e-16  2.9856e-15  8.6299e-16


    In this paper, we discussed the structured backward error analysis for the generalized saddle point problems arising from the incompressible Navier-Stokes equations and obtained an explicit expression for the structured backward error. The structured backward error may be much larger than the relative unstructured backward error, which makes it more suitable for assessing the validity and practicability of numerical algorithms for solving the structured linear system (1.1). In addition, we presented some test examples to examine the accuracy (strong stability) and effectiveness of some existing preconditioned GMRES methods by means of the structured backward error.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to express their gratitude to the anonymous referees for their detailed and helpful suggestions that substantially improved the manuscript. This work was partially supported by the Research Ability Cultivation Fund of HUAS (Hubei University of Arts and Science) (Nos. 2021kpgpzk04, 2020kypytd006) and the Excellent Young and Middle-aged Science and Technology Innovation Team Project of the Education Department of Hubei Province (T2022029).

    The authors disclosed no conflicts of interest in publishing this paper.



    [1] M. Benzi, X. P. Guo, A dimensional split preconditioner for Stokes and linearized Navier-Stokes equations, Appl. Numer. Math., 61 (2011), 66–76.
    [2] M. Benzi, M. K. Ng, Q. Niu, Z. Wang, A relaxed dimensional factorization preconditioner for the incompressible Navier-Stokes equations, J. Comput. Phys., 230 (2011), 6185–6202.
    [3] J. R. Bunch, J. W. Demmel, C. F. Van Loan, The strong stability of algorithms for solving symmetric linear systems, SIAM J. Matrix Anal. Appl., 10 (1989), 494–499. https://doi.org/10.1137/0610035 doi: 10.1137/0610035
    [4] Y. Cao, S. X. Miao, Y. S. Cui, A relaxed splitting preconditioner for generalized saddle point problems, Comput. Appl. Math., 34 (2015), 865–879. https://doi.org/10.1007/s40314-014-0150-y doi: 10.1007/s40314-014-0150-y
    [5] Y. Cao, L. Q. Yao, M. Q. Jiang, A modified dimensional split preconditioner for generalized saddle point problems, J. Comput. Appl. Math., 250 (2013), 70–82. https://doi.org/10.1016/j.cam.2013.02.017 doi: 10.1016/j.cam.2013.02.017
    [6] X. S. Chen, W. Li, X. Chen, J. Liu, Structured backward errors for generalized saddle point systems, Linear Algebra Appl., 436 (2012), 3109–3119. https://doi.org/10.1016/j.laa.2011.10.012 doi: 10.1016/j.laa.2011.10.012
    [7] G. Cheng, J. C. Li, A relaxed upper and lower triangular splitting preconditioner for the linearized Navier–Stokes equation, Comput. Math. Appl., 80 (2020), 43–60. https://doi.org/10.1016/j.camwa.2020.02.025 doi: 10.1016/j.camwa.2020.02.025
    [8] H. C. Elman, A. Ramage, D. J. Silvester, Algorithm 866: IFISS, a Matlab toolbox for modelling incompressible flow, ACM Trans. Math. Software, 33 (2007), 14. https://doi.org/10.1145/1236463.1236469 doi: 10.1145/1236463.1236469
    [9] H. C. Elman, D. J. Silvester, A. J. Wathen, Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics, Oxford: Oxford University Press, 2014.
    [10] H. T. Fan, X. Y. Zhu, A modified relaxed splitting preconditioner for generalized saddle point problems from the incompressible Navier-Stokes equations, Appl. Math. Lett., 55 (2016), 18–26. https://doi.org/10.1016/j.aml.2015.11.011 doi: 10.1016/j.aml.2015.11.011
    [11] G. H. Golub, C. F. Van Loan, Matrix Computations, Baltimore: The Johns Hopkins University Press, 2013.
    [12] A. Graham, Kronecker Products and Matrix Calculus with Application, New York: Wiley, 1981.
    [13] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Philadelphia: SIAM, 2002. https://doi.org/10.1137/1.9780898718027
    [14] Y. M. Huang, A practical formula for computing optimal parameters in the HSS iteration methods, J. Comput. Appl. Math., 255 (2014), 142–149. https://doi.org/10.1016/j.cam.2013.01.023 doi: 10.1016/j.cam.2013.01.023
    [15] Y. F. Ke, C. F. Ma, An inexact modified relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier-Stokes equation, Numer. Algor., 75 (2017), 1103–1121. https://doi.org/10.1007/s11075-016-0233-5 doi: 10.1007/s11075-016-0233-5
    [16] Y. J. Li, X. Y. Zhu, H. T. Fan, Relaxed block upper–lower triangular preconditioner for generalized saddle point problems from the incompressible Navier-Stokes equations, J. Comput. Appl. Math., 364 (2020), 112329. https://doi.org/10.1016/j.cam.2019.06.045 doi: 10.1016/j.cam.2019.06.045
    [17] P. Lv, B. Zheng, Structured backward error analysis for a class of block three-by-three saddle point problems, Numer. Algor., 90 (2022), 59–78. https://doi.org/10.1007/s11075-021-01179-6 doi: 10.1007/s11075-021-01179-6
    [18] L. S. Meng, Y. W. He, S. X. Miao, Structured backward errors for two kinds of generalized saddle point systems, Linear Multilinear Algebra, 70 (2022), 1345–1355. https://doi.org/10.1080/03081087.2020.1760193 doi: 10.1080/03081087.2020.1760193
    [19] J. L. Rigal, J. Gaches, On the compatibility of a given solution with the data of a linear system, J. Assoc. Comput. Mach., 14 (1967), 543–548. https://doi.org/10.1145/321406.321416 doi: 10.1145/321406.321416
    [20] J. G. Sun, Structured backward errors for KKT systems, Linear Algebra Appl., 288 (1999), 75–88. https://doi.org/10.1016/S0024-3795(98)10184-2 doi: 10.1016/S0024-3795(98)10184-2
    [21] J. G. Sun, Matrix Perturbation Analysis, Beijing: Science Press, 2001.
    [22] N. B. Tan, T. Z. Huang, Z. J. Hu, A relaxed splitting preconditioner for the incompressible Navier–Stokes equations, J. Appl. Math., 2012 (2012), 402490. https://doi.org/10.1155/2012/402490 doi: 10.1155/2012/402490
    [23] H. Xiang, Y. M. Wei, On normwise structured backward errors for saddle point systems, SIAM J. Matrix Anal. Appl., 29 (2007), 838–849. https://doi.org/10.1137/060663684 doi: 10.1137/060663684
    [24] B. Zheng, P. Lv, Structured backward error analysis for generalized saddle point problems, Adv. Comput. Math., 46 (2020), 34. https://doi.org/10.1007/s10444-020-09787-x doi: 10.1007/s10444-020-09787-x
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)