Research article

Surfaces of coordinate finite II-type

  • We study the class of surfaces of revolution in the 3-dimensional Euclidean space E3 with nonvanishing Gauss curvature whose position vector x satisfies the condition ΔIIx=Ax, where A is a square matrix of order 3 and ΔII denotes the Laplace operator of the second fundamental form II of the surface. We show that a surface of revolution satisfying the preceding relation is a catenoid or part of a sphere.

    Citation: Mutaz Al-Sabbagh. Surfaces of coordinate finite II-type[J]. AIMS Mathematics, 2025, 10(3): 6258-6269. doi: 10.3934/math.2025285




Consider the large and sparse saddle-point problem \mathcal{A}x = b of the form:

\begin{equation} \mathcal{A}x = \left( \begin{array}{cc} A & B \\ -B^{T} & 0 \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right) = \left( \begin{array}{c} f \\ g \\ \end{array} \right), \end{equation} (1.1)

where A\in\mathbb{R}^{n\times n} is nonsymmetric positive definite, in other words, its symmetric part H = \frac{1}{2}(A+A^{T}) is positive definite, B\in\mathbb{R}^{n\times m} (n\ge m) is a full column rank matrix, f\in\mathbb{R}^{n} and g\in\mathbb{R}^{m} are given vectors, and B^{T} denotes the transpose of B. Under these assumptions, the nonsingularity of \mathcal{A} assures the existence and uniqueness of the solution of the saddle-point problem (1.1). When the matrices A and B are large and sparse, linear systems of the form (1.1) arise in many problems of scientific computing and engineering applications, including computational fluid dynamics, optimization, constrained least squares problems, generalized least squares problems, incompressible flow problems, mixed finite element approximations of elliptic PDEs, and so forth. See [1] and the references therein for more examples and additional information. In addition, we also refer to [2] for periodic Sylvester matrix equations and generalized coupled Sylvester matrix equations.
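The assumptions above can be checked concretely. The following is a minimal NumPy sketch — hypothetical small sizes n, m and randomly generated blocks, not taken from the paper — that assembles a matrix of the form (1.1) and verifies its nonsingularity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3  # hypothetical sizes with n >= m

# Nonsymmetric positive definite (1,1)-block: its symmetric part H must be SPD.
A = np.eye(n) * n + rng.standard_normal((n, n))
H = 0.5 * (A + A.T)
assert np.all(np.linalg.eigvalsh(H) > 0)     # symmetric part is positive definite

B = rng.standard_normal((n, m))
assert np.linalg.matrix_rank(B) == m         # full column rank

# Assemble the saddle-point matrix of (1.1).
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])
assert np.linalg.matrix_rank(calA) == n + m  # nonsingular under the assumptions
```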

Since the coefficient matrix \mathcal{A} of (1.1) is usually large, sparse, and extremely ill-conditioned, iteration methods are generally more attractive than direct methods in terms of storage requirements and computing time. As is well known, a clustered spectrum of the preconditioned matrix often results in rapid convergence of Krylov subspace methods. Therefore, in light of the special structure of the saddle-point problem (1.1), and for the sake of achieving a rapid convergence rate and improving computational efficiency, many effective and practical preconditioning techniques have been investigated in the literature, such as preconditioned Krylov subspace methods [3], SOR-type methods [4,5], Uzawa-type methods [6,7], HSS-type methods and their accelerated variants [8,9,10], and shift-splitting (denoted by SS) and generalized shift-splitting (denoted by GSS) iteration methods [11,12,13], and so forth.

In the past few years, many (G)SS-type iteration methods have been proposed that greatly improve the convergence rate by choosing appropriate iteration parameters. Based on the theory of shifting and the clustered spectrum of the coefficient matrix, Bai and Zhang [12] presented a regularized conjugate gradient method for symmetric positive definite systems. In [13], Bai et al. investigated a shift-splitting iteration method for solving large and sparse non-Hermitian positive definite systems. A modification of the SS preconditioner (i.e., the GSS preconditioner) was established in [14,15] for a symmetric positive definite (1,1)-block sub-matrix A and positive iteration parameters \alpha and \beta. Here, the GSS iteration method is defined as

\begin{equation} \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}+A & B \\ -B^{T} & \beta I_{m} \\ \end{array} \right)x^{(k+1)} = \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}-A & -B \\ B^{T} & \beta I_{m} \\ \end{array} \right)x^{(k)}+\left( \begin{array}{c} f \\ g \\ \end{array} \right), \end{equation} (1.2)

    where the related GSS preconditioner is given by

\begin{equation} \mathcal{P}_{GSS} = \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}+A & B \\ -B^{T} & \beta I_{m} \\ \end{array} \right), \end{equation} (1.3)

with I_{q} denoting the identity matrix of size q. In particular, when \alpha = \beta, the GSS iteration method (1.2) reduces to the SS iteration method [16], where the SS preconditioner is given by

\begin{equation} \mathcal{P}_{SS} = \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}+A & B \\ -B^{T} & \alpha I_{m} \\ \end{array} \right). \end{equation} (1.4)
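For concreteness, the preconditioners (1.3) and (1.4) can be assembled in a few lines. This is only an illustrative NumPy sketch with random stand-in blocks; in practice one keeps everything sparse:

```python
import numpy as np

def p_gss(A, B, alpha, beta):
    """GSS preconditioner (1.3): (1/2) * [[alpha*I + A, B], [-B^T, beta*I]]."""
    n, m = B.shape
    return 0.5 * np.block([[alpha * np.eye(n) + A, B],
                           [-B.T, beta * np.eye(m)]])

def p_ss(A, B, alpha):
    """SS preconditioner (1.4): the GSS preconditioner with beta = alpha."""
    return p_gss(A, B, alpha, alpha)

rng = np.random.default_rng(1)
n, m = 6, 2
A = np.eye(n) * n + rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
assert np.allclose(p_ss(A, B, 0.7), p_gss(A, B, 0.7, 0.7))
```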

For solving the saddle-point problem (1.1), Cao et al. [17] applied the GSS iteration method (1.2) and proposed a deteriorated shift-splitting (DSS) preconditioner by removing the shift parameter \alpha in the (1,1)-block of the GSS preconditioner. Based on the symmetric and skew-symmetric splitting A = H+S, with S = \frac{1}{2}(A-A^{T}), of the (1,1)-block sub-matrix A, Zhou et al. [18] proposed a modified shift-splitting (denoted by MSS) preconditioner of the form

\begin{equation} \mathcal{P}_{MSS} = \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}+2H & B \\ -B^{T} & \alpha I_{m} \\ \end{array} \right), \end{equation} (1.5)

and gave a sufficient condition on the iteration parameter \alpha for the convergence of the MSS iteration method. In [19], Huang et al. proposed a generalized modified shift-splitting iteration method (denoted by GMSS) by introducing two iteration parameters \alpha and \beta instead of the single parameter \alpha, which further improves the convergence rate of the MSS iteration method; the corresponding GMSS preconditioner is given by

\begin{equation} \mathcal{P}_{GMSS} = \frac{1}{2}\left( \begin{array}{cc} \alpha I_{n}+2H & B \\ -B^{T} & \beta I_{m} \\ \end{array} \right). \end{equation} (1.6)

The aim of this paper is to propose an improvement of the (G)SS-type iteration methods and to investigate the SFHSS preconditioner for Krylov subspace methods, such as the GMRES iteration method [20], which significantly accelerates the convergence of the related iterative method. According to our theoretical analysis, when the iteration parameters \alpha and \beta satisfy certain conditions, the SFHSS iteration method converges to the unique solution of the saddle-point system (1.1). The rest of this paper is organized as follows. In Section 2, we review the SFHSS iteration method and its implementation. In Section 3, we analyze the convergence properties of the SFHSS iteration method for nonsingular saddle-point problems with a nonsymmetric positive definite (1,1)-block. In Section 4, we discuss the algebraic properties of the resulting SFHSS preconditioned matrix. Finally, numerical experiments arising from a finite element discretization of the Oseen problem are presented in Section 5 to show the correctness of the theoretical analysis and the feasibility of the SFHSS iteration method.

The main purpose of this section is to introduce the new GSS-type iteration method (i.e., the SFHSS iteration method) for the nonsingular saddle-point problem (1.1). Based on the local Hermitian and skew-Hermitian splitting scheme of the non-Hermitian positive definite (1,1)-block sub-matrix, Yang and Wu [6] presented the Uzawa-HSS iteration method. Let A = H+S be the symmetric and skew-symmetric splitting of the (1,1)-block of the coefficient matrix \mathcal{A} defined as in (1.1), with S = \frac{1}{2}(A-A^{T}). Following the idea of the Uzawa-HSS iteration method, we make the following matrix splitting for the coefficient matrix \mathcal{A} of the linear system (1.1):

\begin{equation*} \mathcal{A} = \mathcal{P}_{SFHSS}-\mathcal{N}_{SFHSS}, \end{equation*}

where the SFHSS preconditioner \mathcal{P}_{SFHSS} is given by

\begin{equation} \mathcal{P}_{SFHSS} = \frac{1}{4}\left( \begin{array}{cc} \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S) & 2B \\ -2B^{T} & \beta B^{T}B \\ \end{array} \right) \end{equation} (2.1)

and the matrix \mathcal{N}_{SFHSS} is defined as

\begin{equation*} \mathcal{N}_{SFHSS} = \frac{1}{4}\left( \begin{array}{cc} \frac{1}{\alpha}(\alpha I_{n}-2H)(\alpha I_{n}-2S) & -2B \\ 2B^{T} & \beta B^{T}B \\ \end{array} \right). \end{equation*}
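The splitting can be verified numerically: the identity A = P_SFHSS − N_SFHSS holds because the cross terms of (αI±2H)(αI±2S) cancel. The sketch below checks this for a small random instance (sizes and data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
A = np.eye(n) * n + rng.standard_normal((n, n))
H, S = 0.5 * (A + A.T), 0.5 * (A - A.T)   # symmetric / skew-symmetric parts
B = rng.standard_normal((n, m))
alpha, beta = 1.5, 0.8
In = np.eye(n)

P = 0.25 * np.block([[(alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha, 2 * B],
                     [-2 * B.T, beta * B.T @ B]])
N = 0.25 * np.block([[(alpha * In - 2 * H) @ (alpha * In - 2 * S) / alpha, -2 * B],
                     [2 * B.T, beta * B.T @ B]])
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])
assert np.allclose(P - N, calA)   # A = P_SFHSS - N_SFHSS
```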

On the basis of the above splitting, we derive the following SFHSS iteration method

\begin{equation} \mathcal{P}_{SFHSS}x^{(k+1)} = \mathcal{N}_{SFHSS}x^{(k)}+b, \end{equation} (2.2)

and the SFHSS iteration matrix \tau(\alpha,\beta) is given by

\begin{equation} \tau(\alpha,\beta) = \mathcal{P}_{SFHSS}^{-1}\mathcal{N}_{SFHSS}. \end{equation} (2.3)

Denote

\begin{equation*} \Theta_{1} = \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S),\quad \Theta_{2} = \frac{1}{\alpha}(\alpha I_{n}-2H)(\alpha I_{n}-2S), \end{equation*}
\begin{equation*} \Phi_{1} = \beta B^{T}B+4B^{T}\Theta_{1}^{-1}B,\quad \Phi_{2} = \Theta_{1}+4B(\beta B^{T}B)^{-1}B^{T}. \end{equation*}

More precisely, multiplying both sides of (2.2) from the left by the matrices

\begin{equation*} \left( \begin{array}{cc} I_{n} & -2B(\beta B^{T}B)^{-1} \\ 0 & I_{m} \\ \end{array} \right)\quad \mbox{and}\quad \left( \begin{array}{cc} I_{n} & 0 \\ 2B^{T}\Theta_{1}^{-1} & I_{m} \\ \end{array} \right), \end{equation*}

respectively, we can easily obtain explicit formulations of u^{(k+1)} and v^{(k+1)}. Therefore, the iteration scheme (2.2) naturally leads to the following algorithmic description of the SFHSS iteration method:

Method 2.1. (The SFHSS method) Given an initial guess x^{(0)} = [u^{(0)T},v^{(0)T}]^{T}, for k = 0,1,2,\ldots, until the iteration sequence [u^{(k)T},v^{(k)T}]^{T} converges,

(i) compute u^{(k+1)} from

\begin{equation*} u^{(k+1)} = u^{(k)}-4\Phi_{2}^{-1}[A+2B(\beta B^{T}B)^{-1}B^{T}]u^{(k)}-4\Phi_{2}^{-1}Bv^{(k)}+4\Phi_{2}^{-1}[f-2B(\beta B^{T}B)^{-1}g], \end{equation*}

(ii) compute v^{(k+1)} from

\begin{equation*} v^{(k+1)} = v^{(k)}+2\Phi_{1}^{-1}B^{T}(I_{n}+\Theta_{1}^{-1}\Theta_{2})u^{(k)}-8\Phi_{1}^{-1}B^{T}\Theta_{1}^{-1}Bv^{(k)}+4\Phi_{1}^{-1}(g+2B^{T}\Theta_{1}^{-1}f). \end{equation*}

Following the iteration scheme (2.2) again, it is not difficult to see that the SFHSS iteration method can be written in the following fixed-point form

\begin{equation} x^{(k+1)} = \tau(\alpha,\beta)x^{(k)}+\mathcal{P}_{SFHSS}^{-1}b. \end{equation} (2.4)

It is easy to see that at each step of the SFHSS iteration method (2.4), or of a preconditioned Krylov subspace method such as GMRES, we need to solve a linear system \mathcal{P}_{SFHSS}z = r, i.e.,

\begin{equation} \frac{1}{4}\left( \begin{array}{cc} \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S) & 2B \\ -2B^{T} & \beta B^{T}B \\ \end{array} \right)\left( \begin{array}{c} z_{1} \\ z_{2} \\ \end{array} \right) = \left( \begin{array}{c} r_{1} \\ r_{2} \\ \end{array} \right), \end{equation} (2.5)

where z_{1},r_{1}\in\mathbb{R}^{n} and z_{2},r_{2}\in\mathbb{R}^{m}. A brisk calculation confirms that

\begin{equation} \mathcal{P}_{SFHSS} = \frac{1}{4}\left( \begin{array}{cc} I_{n} & 2B(\beta B^{T}B)^{-1} \\ 0 & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} \Phi_{2} & 0 \\ 0 & \beta B^{T}B \\ \end{array} \right)\left( \begin{array}{cc} I_{n} & 0 \\ -2(\beta B^{T}B)^{-1}B^{T} & I_{m} \\ \end{array} \right). \end{equation} (2.6)

From the decomposition of \mathcal{P}_{SFHSS} in (2.6), it is obvious that

\begin{equation} \left( \begin{array}{c} z_{1} \\ z_{2} \\ \end{array} \right) = 4\left( \begin{array}{cc} I_{n} & 0 \\ 2(\beta B^{T}B)^{-1}B^{T} & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} \Phi_{2}^{-1} & 0 \\ 0 & (\beta B^{T}B)^{-1} \\ \end{array} \right)\left( \begin{array}{cc} I_{n} & -2B(\beta B^{T}B)^{-1} \\ 0 & I_{m} \\ \end{array} \right)\left( \begin{array}{c} r_{1} \\ r_{2} \\ \end{array} \right). \end{equation} (2.7)

Then we have the following algorithmic implementation of the SFHSS iteration method.

Algorithm 2.1. For a given vector (r_{1}^{T},r_{2}^{T})^{T}, the vector (z_{1}^{T},z_{2}^{T})^{T} can be derived from (2.7) by the following steps:

(1) solve B^{T}Bt = \frac{1}{\beta}r_{2};

(2) solve \Phi_{2}z_{1} = 4(r_{1}-2Bt);

(3) solve B^{T}Bp = \frac{2}{\beta}B^{T}z_{1};

(4) compute z_{2} = p+4t.
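A direct translation of Algorithm 2.1 into NumPy reads as follows. This is a dense toy sketch: plain direct solves stand in for the Cholesky/ILU solves recommended below, and the data are random placeholders. The result is checked against a direct solve with the assembled preconditioner (2.1):

```python
import numpy as np

def sfhss_apply_inverse(H, S, B, alpha, beta, r1, r2):
    """Apply P_SFHSS^{-1} to (r1, r2) via the four steps of Algorithm 2.1."""
    n = H.shape[0]
    In = np.eye(n)
    BtB = B.T @ B
    theta1 = (alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha
    phi2 = theta1 + 4 * B @ np.linalg.solve(beta * BtB, B.T)
    t = np.linalg.solve(BtB, r2 / beta)               # (1) B^T B t = r2 / beta
    z1 = np.linalg.solve(phi2, 4 * (r1 - 2 * B @ t))  # (2) Phi_2 z1 = 4 (r1 - 2 B t)
    p = np.linalg.solve(BtB, 2 * B.T @ z1 / beta)     # (3) B^T B p = (2/beta) B^T z1
    return z1, p + 4 * t                              # (4) z2 = p + 4 t

rng = np.random.default_rng(3)
n, m = 6, 2
A = np.eye(n) * n + rng.standard_normal((n, n))
H, S = 0.5 * (A + A.T), 0.5 * (A - A.T)
B = rng.standard_normal((n, m))
alpha, beta = 1.2, 0.9
In = np.eye(n)
P = 0.25 * np.block([[(alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha, 2 * B],
                     [-2 * B.T, beta * B.T @ B]])
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)
z1, z2 = sfhss_apply_inverse(H, S, B, alpha, beta, r1, r2)
assert np.allclose(P @ np.concatenate([z1, z2]), np.concatenate([r1, r2]))
```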

Since B^{T}B is symmetric positive definite and \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S)+4B(\beta B^{T}B)^{-1}B^{T} is nonsymmetric positive definite, in practical implementations the sub-linear systems B^{T}Bz = r can be solved by the Cholesky factorization, and the systems [\frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S)+4B(\beta B^{T}B)^{-1}B^{T}]z = r contained in the SFHSS iteration method can be solved inexactly by a sparse ILU factorization or by the GMRES iteration method.

Consider the solution of a linear system of the form \mathcal{A}x = b with \mathcal{A}\in\mathbb{R}^{(n+m)\times(n+m)}. If M is nonsingular, then the representation \mathcal{A} = M-N is called a splitting. With the iteration matrix T = M^{-1}N and c = M^{-1}b, the stationary iteration scheme for the linear system \mathcal{A}x = b is defined as

\begin{equation} x^{(k+1)} = Tx^{(k)}+c. \end{equation} (3.1)

As is well known, if the spectral radius \rho(T) of the iteration matrix T is less than one, then the stationary iteration scheme (3.1) converges to the unique solution of \mathcal{A}x = b for any choice of the initial vector x^{(0)}.
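A tiny self-contained illustration of this classical fact, with an arbitrary 2×2 example unrelated to the saddle-point setting:

```python
import numpy as np

# The scheme x_{k+1} = T x_k + c converges for any x_0 exactly when rho(T) < 1;
# the fixed point solves (I - T) x = c.
T = np.array([[0.5, 0.2],
              [0.1, 0.3]])
c = np.array([1.0, 2.0])
assert np.max(np.abs(np.linalg.eigvals(T))) < 1   # rho(T) < 1

x = np.array([100.0, -50.0])   # arbitrary initial guess
for _ in range(200):
    x = T @ x + c
assert np.allclose(x, np.linalg.solve(np.eye(2) - T, c))
```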

Let \lambda be an eigenvalue of the iteration matrix \tau(\alpha,\beta) in (2.3) and x = (u^{T},v^{T})^{T} be the corresponding eigenvector. To study the convergence properties of the SFHSS iteration method (2.4), we need to consider the following generalized eigenvalue problem:

\begin{equation} \left( \begin{array}{cc} \frac{1}{\alpha}(\alpha I_{n}-2H)(\alpha I_{n}-2S) & -2B \\ 2B^{T} & \beta B^{T}B \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right) = \lambda\left( \begin{array}{cc} \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S) & 2B \\ -2B^{T} & \beta B^{T}B \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right). \end{equation} (3.2)

By straightforward computation, the generalized eigenvalue problem (3.2) is equivalent to the following form

\begin{equation} \left\{\begin{array}{l} \frac{1}{\alpha}(\alpha I_{n}-2H)(\alpha I_{n}-2S)u-2Bv = \frac{\lambda}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S)u+2\lambda Bv,\\ 2B^{T}u+\beta B^{T}Bv = -2\lambda B^{T}u+\beta\lambda B^{T}Bv. \end{array}\right. \end{equation} (3.3)

For convenience, we denote

\begin{equation} \frac{u^{*}Au}{u^{*}u} = a+bi,\quad \frac{u^{*}HSu}{u^{*}u} = c+di\quad \mbox{and}\quad \frac{u^{*}B(B^{T}B)^{-1}B^{T}u}{u^{*}u} = e, \end{equation} (3.4)

where a>0, e\ge 0,

\begin{equation*} c = \frac{1}{2}\frac{u^{*}(HS-SH)u}{u^{*}u}\quad \mbox{and}\quad d = \frac{1}{2i}\frac{u^{*}(HS+SH)u}{u^{*}u}. \end{equation*}

    In what follows, we give the following lemmas to verify the convergence of the SFHSS iteration method (2.4).

Lemma 3.1. ([21]). If S is a skew-Hermitian matrix, then iS is a Hermitian matrix and u^{*}Su is a purely imaginary number or zero for all u\in\mathbb{C}^{n}.

Lemma 3.2. ([22]). Both roots of the complex quadratic equation \lambda^{2}-\Phi\lambda+\Psi = 0 have modulus less than one if and only if |\Phi-\bar{\Phi}\Psi|+|\Psi|^{2}<1, where \bar{\Phi} denotes the complex conjugate of \Phi.
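This root-location criterion is easy to spot-check numerically: the sketch below compares it against direct root computation for random complex coefficients (samples extremely close to the boundary of the criterion are skipped to avoid floating-point ambiguity):

```python
import numpy as np

def roots_inside_unit_disk(phi, psi):
    """Both roots of lambda^2 - phi*lambda + psi = 0 have modulus < 1."""
    return all(abs(r) < 1 for r in np.roots([1, -phi, psi]))

def lemma_criterion(phi, psi):
    """The condition of Lemma 3.2: |phi - conj(phi)*psi| + |psi|^2 < 1."""
    return abs(phi - np.conjugate(phi) * psi) + abs(psi) ** 2 < 1

rng = np.random.default_rng(4)
for _ in range(1000):
    phi = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    psi = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    value = abs(phi - np.conjugate(phi) * psi) + abs(psi) ** 2
    if abs(value - 1) < 1e-8:
        continue   # skip near-boundary samples
    assert roots_inside_unit_disk(phi, psi) == lemma_criterion(phi, psi)
```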

Lemma 3.3. Let A be nonsymmetric positive definite and B have full column rank. Assume \lambda is an eigenvalue of the iteration matrix \tau(\alpha,\beta) defined as in (2.3) with \beta>0. If \alpha>0, then \lambda\neq 1, and if \alpha^{2}>4|c|_{\max}, with |c|_{\max} denoting the maximum of |c|, then \lambda\neq -1.

Proof. We follow the spirit of the proof in [17]. If \lambda = 1, then it is straightforward to show that Eq (3.2) yields

\begin{equation*} \left( \begin{array}{cc} A & B \\ -B^{T} & 0 \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \\ \end{array} \right). \end{equation*}

Since A is nonsymmetric positive definite and B has full column rank, we easily obtain u = 0 and v = 0, which contradicts (u,v) being an eigenvector.

If \lambda = -1, then Eq (3.2) reduces to

\begin{equation*} \left\{\begin{array}{l} (\alpha^{2}I_{n}+4HS)u = 0,\\ \beta B^{T}Bv = 0. \end{array}\right. \end{equation*}

If \alpha^{2}>4|c|_{\max}, then \alpha^{2}I_{n}+4HS is nonsingular. Therefore, u = 0 and v = 0, which again contradicts (u,v) being an eigenvector.

    Thus, we complete the proof.

Lemma 3.4. Let the conditions of Lemma 3.3 be satisfied. Assume that \lambda is an eigenvalue of the iteration matrix \tau(\alpha,\beta) defined as in (2.3) and (u,v) is the corresponding eigenvector with u\in\mathbb{C}^{n} and v\in\mathbb{C}^{m}. If 0\neq u\in\mathcal{N}(B^{T}), where \mathcal{N}(B^{T}) denotes the null space of B^{T}, then |\lambda|<1.

Proof. We first verify that u\neq 0. Otherwise, if u = 0, then it follows from the second equation of (3.3) that \beta(\lambda-1)B^{T}Bv = 0. According to Lemma 3.3, \lambda\neq 1, so B^{T}Bv = 0. As B has full column rank, B^{T}B is nonsingular, and we conclude v = 0. This is a contradiction, since (u,v) is an eigenvector; hence u\neq 0.

We now turn to verify |\lambda|<1. Assume u\in\mathcal{N}(B^{T}) with \|u\|_{2} = 1. From the second equation of (3.3), we get v = 0. Following [8, Theorem 2.2] and multiplying the first equation of (3.3) from the left-hand side by u^{*}, it is obvious that

\begin{equation*} \begin{split} |\lambda| & = |u^{*}(\alpha I_{n}+2S)^{-1}(\alpha I_{n}+2H)^{-1}(\alpha I_{n}-2H)(\alpha I_{n}-2S)u|\\ & \le \|(\alpha I_{n}+2S)^{-1}(\alpha I_{n}+2H)^{-1}(\alpha I_{n}-2H)(\alpha I_{n}-2S)\|_{2}\\ & \le \|(\alpha I_{n}-2H)(\alpha I_{n}+2H)^{-1}\|_{2} = \max_{i}\left|\frac{\alpha-2\lambda_{i}(H)}{\alpha+2\lambda_{i}(H)}\right|<1, \end{split} \end{equation*}

where \lambda_{i}(H) denotes the ith eigenvalue of the symmetric positive definite matrix H.

    Therefore, the proof is completed.
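The key contraction step in this proof — that \|(\alpha I_{n}-2H)(\alpha I_{n}+2H)^{-1}\|_{2} equals \max_{i}|(\alpha-2\lambda_{i}(H))/(\alpha+2\lambda_{i}(H))| and is less than one for symmetric positive definite H — can be confirmed numerically with a random SPD matrix and a few illustrative values of α:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
G = rng.standard_normal((n, n))
H = G @ G.T + np.eye(n)        # symmetric positive definite
lam = np.linalg.eigvalsh(H)    # eigenvalues of H, all positive
for alpha in (0.5, 1.0, 3.0):
    M = (alpha * np.eye(n) - 2 * H) @ np.linalg.inv(alpha * np.eye(n) + 2 * H)
    norm2 = np.linalg.norm(M, 2)
    # Both factors are rational functions of H, so they share H's eigenvectors.
    assert np.isclose(norm2, np.max(np.abs((alpha - 2 * lam) / (alpha + 2 * lam))))
    assert norm2 < 1
```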

Theorem 3.1. Let the conditions of Lemma 3.3 be satisfied, and assume that \lambda is an eigenvalue of the iteration matrix \tau(\alpha,\beta) defined as in (2.3). Suppose the positive iteration parameters \alpha and \beta satisfy the following conditions:

if ac+bd\ge 0, then

\begin{equation*} \alpha^{2}>4|c|_{\max}\quad \mbox{and}\quad \beta>\frac{16d^{2}e}{\alpha^{3}a^{2}}, \end{equation*}

and if ac+bd<0, then

\begin{equation*} \alpha^{2}>\max\left\{\frac{4|ac+bd|_{\max}}{a},\,4|c|_{\max}\right\}\quad \mbox{and}\quad \beta>\frac{16d^{2}e}{\alpha a[\alpha^{2}a-|4ac+4bd|]}. \end{equation*}

Then the iteration method (2.4) converges to the unique solution of the nonsymmetric saddle-point problem (1.1), i.e.,

\begin{equation*} |\lambda|<1. \end{equation*}

Proof. Combining Lemmas 3.3 and 3.4, in order to complete the proof we only need to verify the case B^{T}u\neq 0. Suppose u\notin\mathcal{N}(B^{T}); then the second equation of (3.3) yields

\begin{equation} v = \frac{2(\lambda+1)}{\beta(\lambda-1)}(B^{T}B)^{-1}B^{T}u. \end{equation} (3.5)

By substituting the relationship (3.5) into the first equation of (3.3), we have

\begin{equation} (\alpha I_{n}-2H)(\alpha I_{n}-2S)u = \lambda(\alpha I_{n}+2H)(\alpha I_{n}+2S)u+\frac{4\alpha(\lambda+1)^{2}}{\beta(\lambda-1)}B(B^{T}B)^{-1}B^{T}u. \end{equation} (3.6)

Multiplying Eq (3.6) from the left-hand side by u^{*}, after straightforward calculations Eq (3.6) yields

\begin{equation} \alpha^{2}\beta(\lambda-1)^{2}+2\alpha\beta(\lambda^{2}-1)\frac{u^{*}Au}{u^{*}u}+4\beta(\lambda-1)^{2}\frac{u^{*}HSu}{u^{*}u}+4\alpha(\lambda+1)^{2}\frac{u^{*}B(B^{T}B)^{-1}B^{T}u}{u^{*}u} = 0. \end{equation} (3.7)

Following (3.4), a quadratic equation in \lambda is derived from Eq (3.7). After some algebra, it is straightforward to show that

\begin{equation} [(\alpha^{2}+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i]\lambda^{2}+2(4\alpha e-\alpha^{2}\beta-4\beta c-4\beta di)\lambda+\alpha^{2}\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i = 0. \end{equation} (3.8)

If (\alpha^{2}+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i = 0, then (\alpha^{2}+2\alpha a+4c)\beta+4\alpha e = 0 and \alpha b+2d = 0. Therefore, Eq (3.8) yields

\begin{equation*} \lambda = -\frac{\alpha^{2}\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i}{2(4\alpha e-\alpha^{2}\beta-4\beta c-4\beta di)} = \frac{\beta a+\beta bi}{4e+\beta a+\beta bi}. \end{equation*}

By Lemma 3.3, we get \lambda\neq\pm 1 as \alpha^{2}>4|c|_{\max}. Noting that a>0 and e>0 in this case, we have

\begin{equation*} |\lambda| = \frac{\sqrt{(\beta a)^{2}+(\beta b)^{2}}}{\sqrt{(4e+\beta a)^{2}+(\beta b)^{2}}}<1. \end{equation*}

In what follows, we consider the case (\alpha^{2}+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i\neq 0. From Lemma 3.2, we know that |\lambda|<1 if and only if |\Phi-\bar{\Phi}\Psi|+|\Psi|^{2}<1, where, for convenience, we denote

\begin{equation*} \Phi = -\frac{2(4\alpha e-\alpha^{2}\beta-4\beta c-4\beta di)}{(\alpha^{2}+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i} \end{equation*}

and

\begin{equation*} \Psi = \frac{\alpha^{2}\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i}{(\alpha^{2}+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i}. \end{equation*}

After straightforward computation, we have

\begin{equation*} |\Phi-\bar{\Phi}\Psi|+|\Psi|^{2} = \frac{8\alpha\beta\sqrt{\Gamma+(16de)^{2}}+\Upsilon+4\beta^{2}(2d-\alpha b)^{2}}{(\alpha^{2}\beta+2\alpha\beta a+4\beta c+4\alpha e)^{2}+4\beta^{2}(2d+\alpha b)^{2}}, \end{equation*}

where \Gamma = (4\alpha ae-\alpha^{2}\beta a-4\beta ac-4\beta bd)^{2} and \Upsilon = (\alpha^{2}\beta-2\alpha\beta a+4\beta c+4\alpha e)^{2}.

The inequality

\begin{equation*} \begin{split} |\Phi-\bar{\Phi}\Psi|+|\Psi|^{2} & <\frac{8\alpha\beta\sqrt{\Gamma+16\alpha ae(\alpha^{2}\beta a+4\beta ac+4\beta bd)}+\Upsilon+4\beta^{2}(2d-\alpha b)^{2}}{(\alpha^{2}\beta+2\alpha\beta a+4\beta c+4\alpha e)^{2}+4\beta^{2}(2d+\alpha b)^{2}}\\ & = \frac{8\alpha\beta(4\alpha ae+\alpha^{2}\beta a+4\beta ac+4\beta bd)+\Upsilon+4\beta^{2}(2d-\alpha b)^{2}}{(\alpha^{2}\beta+2\alpha\beta a+4\beta c+4\alpha e)^{2}+4\beta^{2}(2d+\alpha b)^{2}} = 1 \end{split} \end{equation*}

holds true provided that

\begin{equation} 16\alpha ae(\alpha^{2}\beta a+4\beta ac+4\beta bd)>(16de)^{2}. \end{equation} (3.9)

Since the left-hand side of (3.9) must then be positive, this implies

\begin{equation} \alpha^{2}a+4ac+4bd>0. \end{equation} (3.10)

Following the inequalities (3.9) and (3.10), if ac+bd\ge 0, we have

\begin{equation*} \alpha^{2}>0\ge -\frac{4(ac+bd)}{a}, \end{equation*}

and

\begin{equation*} \beta>\frac{16d^{2}e}{\alpha^{3}a^{2}}\ge \frac{16d^{2}e}{\alpha a[\alpha^{2}a+(4ac+4bd)]}. \end{equation*}

If ac+bd<0, then we get

\begin{equation*} \alpha^{2}>\frac{4|ac+bd|_{\max}}{a}\ge -\frac{4(ac+bd)}{a}>0, \end{equation*}

and

\begin{equation*} \beta>\frac{16d^{2}e}{\alpha a[\alpha^{2}a-|4ac+4bd|]}\ge \frac{16d^{2}e}{\alpha a[\alpha^{2}a+(4ac+4bd)]}. \end{equation*}

    By making use of α2>4|c|max, we complete the proof.

For the sake of convenience, we denote the maxima of |bd|, |d| and e by |bd|_{\max}, |d|_{\max} and e_{\max}, respectively, and denote the minima of a, b, c and d by a_{\min}, b_{\min}, c_{\min} and d_{\min}, respectively. According to Theorem 3.1, we give the following sufficient conditions for the convergence of the SFHSS iteration method (2.4).

Corollary 3.1. Let the conditions of Theorem 3.1 be satisfied, and let the positive iteration parameters \alpha and \beta satisfy:

if ac+bd\ge 0, then

\begin{equation*} \alpha>2\sqrt{|c|_{\max}}\quad \mbox{and}\quad \beta>\frac{16|d|^{2}_{\max}e_{\max}}{\alpha^{3}a^{2}_{\min}}; \end{equation*}

if ac+bd<0, then

\begin{equation*} \alpha^{2}>4\left(|c|_{\max}+\frac{|bd|_{\max}}{a_{\min}}\right) \end{equation*}

and

\begin{equation*} \beta>\frac{16|d|^{2}_{\max}e_{\max}}{\alpha a_{\min}[\alpha^{2}a_{\min}-4|ac+bd|_{\max}]}. \end{equation*}

Then the SFHSS iteration method (2.4) converges to the unique solution of the nonsymmetric saddle-point problem (1.1).

    Then the generalized shift-HSS iteration method (2.4) converges to the unique solution of the nonsymmetric saddle point problem (1.1).

Proof. According to Theorem 3.1, if ac+bd\ge 0, then \alpha^{2}>4|c|_{\max} and

\begin{equation*} \beta>\frac{16|d|^{2}_{\max}e_{\max}}{\alpha^{3}a^{2}_{\min}}\ge \frac{16d^{2}e}{\alpha^{3}a^{2}}, \end{equation*}

and if ac+bd<0, then

\begin{equation*} \alpha^{2}>4\left(|c|_{\max}+\frac{|bd|_{\max}}{a_{\min}}\right)\ge \max\left\{\frac{4|ac+bd|_{\max}}{a},\,4|c|_{\max}\right\}, \end{equation*}

and

\begin{equation*} \beta>\frac{16|d|^{2}_{\max}e_{\max}}{\alpha a_{\min}[\alpha^{2}a_{\min}-4|ac+bd|_{\max}]}\ge \frac{16d^{2}e}{\alpha a[\alpha^{2}a-|4ac+4bd|]}. \end{equation*}

    Therefore, we complete the proof.

The main objective of this section is to present some inclusion regions for the spectrum of \mathcal{P}_{SFHSS}^{-1}\mathcal{A} for the saddle-point problem (1.1).

In the following, to derive related bounds on the eigenvalues of the preconditioned saddle-point matrix \mathcal{P}_{SFHSS}^{-1}\mathcal{A}, we study the eigenvalue problem \mathcal{P}_{SFHSS}^{-1}\mathcal{A}x = \eta x, that is to say

\begin{equation} \mathcal{A}x = \eta\mathcal{P}_{SFHSS}x, \end{equation} (4.1)

where \eta denotes any eigenvalue of the preconditioned matrix \mathcal{P}_{SFHSS}^{-1}\mathcal{A} with the corresponding eigenvector x = (u^{T},v^{T})^{T}.

For simplicity, we denote v^{*}B^{T}Bv by \sigma^{2} and the null space of B^{T} by \mathcal{N}(B^{T}); at the same time, the matrix \mathcal{R}_{SFHSS} is defined by

\begin{equation*} \mathcal{R}_{SFHSS} = \frac{1}{4}\left( \begin{array}{cc} \alpha I_{n}+\frac{4}{\alpha}HS & 0 \\ 0 & \beta B^{T}B \\ \end{array} \right), \end{equation*}

then it is easy to see that

\begin{equation} \mathcal{P}_{SFHSS} = \mathcal{R}_{SFHSS}+\frac{1}{2}\mathcal{A}. \end{equation} (4.2)

After some algebra, we can rewrite the generalized eigenvalue problem (4.1) as

\begin{equation} \left(1-\frac{\eta}{2}\right)\mathcal{A}x = \eta\mathcal{R}_{SFHSS}x. \end{equation} (4.3)

Following Lemma 3.3, since the eigenvalue \lambda = 1-\eta of \tau(\alpha,\beta) satisfies \lambda\neq -1 when \alpha^{2}>4|c|_{\max}, we have \eta = 1-\lambda\neq 2. So 1-\frac{1}{2}\eta\neq 0, and we set

\begin{equation*} \theta = \frac{2\eta}{2-\eta},\quad \mbox{for which}\quad \eta = \frac{2\theta}{\theta+2} = 2-\frac{4}{\theta+2}. \end{equation*}

For convenience, we use \Re(\theta) and \Im(\theta) to denote the real part and imaginary part of the eigenvalue \theta, respectively.

We can explicitly write the equivalent eigenproblem \mathcal{A}x = \theta\mathcal{R}_{SFHSS}x as

\begin{equation} \left( \begin{array}{cc} A & B \\ -B^{T} & 0 \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right) = \frac{\theta}{4}\left( \begin{array}{cc} \alpha I_{n}+\frac{4}{\alpha}HS & 0 \\ 0 & \beta B^{T}B \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right). \end{equation} (4.4)

Eq (4.4) is equivalent to

\begin{equation} \left\{\begin{array}{l} Au+Bv = \theta\left(\frac{\alpha}{4}I_{n}+\frac{1}{\alpha}HS\right)u,\\ -B^{T}u = \frac{1}{4}\beta\theta B^{T}Bv. \end{array}\right. \end{equation} (4.5)

It is obvious that u\neq 0: otherwise, the second equation of (4.5) would imply \theta = 0 or v = 0; by Lemma 3.3, \theta\neq 0, and u = v = 0 is impossible for an eigenvector. So u\neq 0.

Theorem 4.1. Let the conditions of Lemma 3.3 be satisfied. Assume (\eta,x) is an eigenpair of (4.1) with x = (u^{T},v^{T})^{T} and \|u\|_{2} = 1. Then for \alpha,\beta>0, the eigenvalue \eta can be written as \eta = \frac{2\theta}{\theta+2}, where \theta satisfies the following:

(i) If v = 0, then u\in\mathcal{N}(B^{T}) and

\begin{equation} \frac{4\alpha|\lambda(A)|_{\min}}{\sqrt{\alpha^{2}(\alpha^{2}+8c_{\max})+16\rho^{2}(HS)}}\le|\theta|\le \frac{4\alpha\rho(A)}{\sqrt{\alpha^{2}(\alpha^{2}+8c_{\min})+16|\lambda(HS)|^{2}_{\min}}}, \end{equation} (4.6)

with

\begin{equation} \Re(\theta) = \frac{4\alpha(\alpha^{2}a+4ac+4bd)}{(\alpha^{2}+4c)^{2}+16d^{2}}\quad \mbox{and}\quad \Im(\theta) = \frac{4\alpha(\alpha^{2}b+4bc-4ad)}{(\alpha^{2}+4c)^{2}+16d^{2}}. \end{equation} (4.7)

(ii) If v\neq 0, then u\notin\mathcal{N}(B^{T}) and

\begin{equation} \frac{4\alpha(\alpha^{2}+4c_{\min})|\lambda(A)|_{\min}}{\Theta(\alpha,\beta)}<|\theta|\le \frac{4\alpha\rho(A)\sqrt{\Pi(\alpha,\beta)}}{\Upsilon(\alpha,\beta)}, \end{equation} (4.8)

with

\begin{equation} \Re(\theta) = \frac{4\alpha(\alpha^{2}a+4ac+4bd-\alpha\beta a\sigma^{2})}{(\alpha^{2}+4c)^{2}-(\alpha\beta\sigma^{2})^{2}+16d^{2}},\quad \Im(\theta) = \frac{4\alpha(\alpha^{2}b+4bc+\alpha\beta b\sigma^{2}-4ad)}{(\alpha^{2}+4c)^{2}-(\alpha\beta\sigma^{2})^{2}+16d^{2}}, \end{equation} (4.9)

and

\begin{equation*} \begin{split} \Pi(\alpha,\beta) & = \alpha^{2}(\alpha^{2}+8c_{\max})+16\rho^{2}(HS)+2(\alpha^{2}+4c_{\max})\alpha\beta\sigma^{2}_{\max}+(\alpha\beta\sigma^{2}_{\max})^{2},\\ \omega_{1}(\alpha,\beta) & = \alpha^{2}(\alpha^{2}+8c_{\min})+16|\lambda(HS)|^{2}_{\min}-(\alpha\beta\sigma^{2}_{\max})^{2},\\ \omega_{2}(\alpha,\beta) & = (\alpha\beta\sigma^{2}_{\min})^{2}-\alpha^{2}(\alpha^{2}+8c_{\max})-16\rho^{2}(HS),\\ \Upsilon(\alpha,\beta) & = \min\{\omega_{1}(\alpha,\beta),\,\omega_{2}(\alpha,\beta)\},\\ \Theta(\alpha,\beta) & = \alpha^{2}(\alpha^{2}+8c_{\min})+16\rho^{2}(HS)+(\alpha\beta\sigma^{2}_{\max})^{2}. \end{split} \end{equation*}

Proof. In order to obtain the inequalities (4.6) and (4.8), we consider two cases: (i) v = 0 and (ii) v\neq 0.

We first verify (i). If v = 0, from (4.5) it is easy to see that

\begin{equation} Au = \theta\left(\frac{\alpha}{4}I_{n}+\frac{1}{\alpha}HS\right)u. \end{equation} (4.10)

Multiplying both sides of (4.10) from the left by u^{*}, it follows from (3.4) that

\begin{equation} \theta = \frac{4\alpha(a+bi)}{(\alpha^{2}+4c)+4di} = \frac{4\alpha(\alpha^{2}a+4ac+4bd)+4\alpha(\alpha^{2}b+4bc-4ad)i}{(\alpha^{2}+4c)^{2}+16d^{2}}. \end{equation} (4.11)

Consequently, we obtain (4.7). After some algebra, it is straightforward to show that

\begin{equation} |\theta| = \sqrt{\Re^{2}(\theta)+\Im^{2}(\theta)} = \frac{4\alpha\sqrt{a^{2}+b^{2}}}{\sqrt{(\alpha^{2}+4c)^{2}+16d^{2}}}. \end{equation} (4.12)

By straightforward calculation, we can get the inequality (4.6).

We now demonstrate the validity of (ii). If v\neq 0, multiplying the first equation of (4.5) by u^{*} from the left yields

\begin{equation} u^{*}Au+u^{*}Bv = \theta u^{*}\left(\frac{\alpha}{4}I_{n}+\frac{1}{\alpha}HS\right)u. \end{equation} (4.13)

Multiplying the conjugate transpose of the second equation of (4.5) by v, we get

\begin{equation} u^{*}Bv = -\frac{1}{4}\beta\bar{\theta}v^{*}B^{T}Bv. \end{equation} (4.14)

Substituting (4.14) into (4.13), we obtain

\begin{equation} 4\alpha u^{*}Au = \alpha^{2}\theta+4\theta u^{*}HSu+\alpha\beta\bar{\theta}v^{*}B^{T}Bv. \end{equation} (4.15)

Following the above notation, it follows that

\begin{equation} 4\alpha(a+bi) = \alpha^{2}\theta+4\theta(c+di)+\alpha\beta\bar{\theta}\sigma^{2}. \end{equation} (4.16)

Separating (4.16) into its real and imaginary parts gives

\begin{equation*} (\alpha^{2}+4c+\alpha\beta\sigma^{2})\Re(\theta)-4d\,\Im(\theta) = 4\alpha a \end{equation*}

and

\begin{equation*} (\alpha^{2}+4c-\alpha\beta\sigma^{2})\Im(\theta)+4d\,\Re(\theta) = 4\alpha b. \end{equation*}

Through direct calculations, we get (4.9) and

\begin{equation} |\theta| = \sqrt{\Re^{2}(\theta)+\Im^{2}(\theta)} = \frac{4\alpha\sqrt{\varphi(\alpha,\beta)+\psi(\alpha,\beta)}}{|(\alpha^{2}+4c)^{2}-(\alpha\beta\sigma^{2})^{2}+16d^{2}|}, \end{equation} (4.17)

where

\begin{equation*} \varphi(\alpha,\beta) = (a^{2}+b^{2})[(\alpha^{2}+4c)^{2}+(4d)^{2}+(\alpha\beta\sigma^{2})^{2}] \end{equation*}

and

\begin{equation*} \psi(\alpha,\beta) = 2[(b^{2}-a^{2})(\alpha^{2}+4c)-8abd]\alpha\beta\sigma^{2}. \end{equation*}

Since \alpha^{2}a+4ac+4bd>0, we have

\begin{equation} \begin{split} \varphi(\alpha,\beta)+\psi(\alpha,\beta) & <\varphi(\alpha,\beta)+2(b^{2}-a^{2})(\alpha^{2}+4c)\alpha\beta\sigma^{2}+4\alpha\beta\sigma^{2}a^{2}(\alpha^{2}+4c)\\ & = \varphi(\alpha,\beta)+2(b^{2}+a^{2})(\alpha^{2}+4c)\alpha\beta\sigma^{2}\\ & = (a^{2}+b^{2})[(\alpha^{2}+4c+\alpha\beta\sigma^{2})^{2}+(4d)^{2}]. \end{split} \end{equation} (4.18)

Consider the two cases

\begin{equation*} (\alpha^{2}+4c)^{2}+(4d)^{2}>(\alpha\beta\sigma^{2})^{2} \end{equation*}

and

\begin{equation*} (\alpha^{2}+4c)^{2}+(4d)^{2}<(\alpha\beta\sigma^{2})^{2}. \end{equation*}

By straightforward computation, in either case we find that

\begin{equation} |\theta|<\frac{4\alpha\sqrt{(a^{2}+b^{2})[(\alpha^{2}+4c+\alpha\beta\sigma^{2})^{2}+(4d)^{2}]}}{|(\alpha^{2}+4c)^{2}-(\alpha\beta\sigma^{2})^{2}+16d^{2}|}\le \frac{4\alpha\sqrt{a^{2}+b^{2}}\sqrt{\Pi(\alpha,\beta)}}{\Upsilon(\alpha,\beta)}. \end{equation} (4.19)

Additionally, if a\ge b, using \alpha^{2}a+4ac+4bd>0 again, we have

\begin{equation} \begin{split} \varphi(\alpha,\beta)+\psi(\alpha,\beta) & >\varphi(\alpha,\beta)+2\left[(b^{2}-a^{2})\left(-\frac{4bd}{a}\right)-8abd\right]\alpha\beta\sigma^{2}\\ & = (a^{2}+b^{2})\left[(\alpha^{2}+4c)^{2}+(4d)^{2}+(\alpha\beta\sigma^{2})^{2}-8\frac{b}{a}d\alpha\beta\sigma^{2}\right]\\ & \ge (a^{2}+b^{2})[(\alpha^{2}+4c)^{2}+(4d)^{2}+(\alpha\beta\sigma^{2})^{2}-8d\alpha\beta\sigma^{2}]\\ & \ge (a^{2}+b^{2})(\alpha^{2}+4c)^{2}. \end{split} \end{equation} (4.20)

If a<b, then we obtain

\begin{equation} \begin{split} \varphi(\alpha,\beta)+\psi(\alpha,\beta) & >\varphi(\alpha,\beta)-16a^{2}d\alpha\beta\sigma^{2}\\ & >(a^{2}+b^{2})(\alpha^{2}+4c)^{2}+2a^{2}[(4d)^{2}+(\alpha\beta\sigma^{2})^{2}-8d\alpha\beta\sigma^{2}]\\ & \ge (a^{2}+b^{2})(\alpha^{2}+4c)^{2}. \end{split} \end{equation} (4.21)

Hence, combining (4.20) and (4.21), we have

\begin{equation} |\theta|>\frac{4\alpha\sqrt{a^{2}+b^{2}}(\alpha^{2}+4c)}{(\alpha^{2}+4c)^{2}+(\alpha\beta\sigma^{2})^{2}+(4d)^{2}}\ge \frac{4\alpha|\lambda(A)|_{\min}(\alpha^{2}+4c_{\min})}{\Theta(\alpha,\beta)}. \end{equation} (4.22)

    Hence, we complete the proof.

Remark 4.1. Following Theorem 4.1, a real eigenvalue \theta arises in two cases:

(i) If v = 0 and \alpha^{2}b+4bc = 4ad, then u\in\mathcal{N}(B^{T}) and the real eigenvalue

\begin{equation*} \theta = \frac{4\alpha a}{\alpha^{2}+4c}>0 \end{equation*}

is bounded by

\begin{equation*} \frac{4\alpha a_{\min}}{\alpha^{2}+4c_{\max}}<\theta<\frac{4\alpha a_{\max}}{\alpha^{2}+4c_{\min}}. \end{equation*}

(ii) If v\neq 0 and \alpha^{2}b+4bc+\alpha\beta b\sigma^{2} = 4ad, then u\notin\mathcal{N}(B^{T}) and the real eigenvalue

\begin{equation*} \theta = \frac{4\alpha a}{\alpha^{2}+4c+\alpha\beta\sigma^{2}}>0 \end{equation*}

is bounded as

\begin{equation*} \frac{4\alpha a_{\min}}{\alpha^{2}+4c_{\max}+\alpha\beta\sigma^{2}_{\max}}<\theta<\frac{4\alpha a_{\max}}{\alpha^{2}+4c_{\min}+\alpha\beta\sigma^{2}_{\min}}. \end{equation*}

Theorem 4.2. Let the conditions of Theorem 4.1 be satisfied. For any iteration parameters \alpha,\beta>0, the eigenvalue \eta of the SFHSS preconditioned matrix \mathcal{P}_{SFHSS}^{-1}\mathcal{A} satisfies

\begin{equation} \frac{2|\theta|_{\min}}{|\theta|_{\min}+2}\le|\eta|<\frac{2|\theta|_{\max}}{\sqrt{|\theta|^{2}_{\max}+4}}. \end{equation} (4.23)

Proof. For any iteration parameters \alpha,\beta>0, since |\theta| = \sqrt{\Re^{2}(\theta)+\Im^{2}(\theta)}, we get 0<\Re(\theta)\le|\theta|. From \eta = \frac{2\theta}{\theta+2}, it is easy to see that

\begin{equation*} |\eta| = \frac{2|\theta|}{\sqrt{|\theta|^{2}+4\Re(\theta)+4}}. \end{equation*}

Hence, we have

\begin{equation*} \frac{2|\theta|}{|\theta|+2}\le|\eta|<\frac{2|\theta|}{\sqrt{|\theta|^{2}+4}}. \end{equation*}

Combining the monotonicity of \frac{2|\theta|}{|\theta|+2} and \frac{2|\theta|}{\sqrt{|\theta|^{2}+4}} with respect to |\theta|, we complete the proof of Theorem 4.2.

Remark 4.2. Combining Theorems 4.1 and 4.2, for \alpha,\beta>0, refined bounds for the eigenvalue \eta of the SFHSS preconditioned matrix \mathcal{P}_{SFHSS}^{-1}\mathcal{A} are given by the following:

(i) If v = 0, then u\in\mathcal{N}(B^{T}) and

\begin{equation*} \frac{4\alpha|\lambda(A)|_{\min}}{2\alpha|\lambda(A)|_{\min}+\sqrt{\alpha^{2}(\alpha^{2}+8c_{\max})+16\rho^{2}(HS)}}\le|\eta|<\frac{4\alpha\rho(A)}{F(\alpha)}, \end{equation*}

where

\begin{equation*} F(\alpha) = \sqrt{4\alpha^{2}\rho^{2}(A)+\alpha^{2}(\alpha^{2}+8c_{\min})+16|\lambda(HS)|^{2}_{\min}}. \end{equation*}

(ii) If v\neq 0, then

\begin{equation*} \frac{4\alpha|\lambda(A)|_{\min}(\alpha^{2}+4c_{\min})}{2\alpha|\lambda(A)|_{\min}(\alpha^{2}+4c_{\min})+\Theta(\alpha,\beta)}<|\eta|\le \frac{4\alpha\rho(A)\sqrt{\Pi(\alpha,\beta)}}{\sqrt{4[\alpha\rho(A)]^{2}\Pi(\alpha,\beta)+\Upsilon^{2}(\alpha,\beta)}}. \end{equation*}

Remark 4.3. Since \eta = \frac{2\theta}{\theta+2}, by simple algebra it is easy to see that \eta is real if and only if \Im(\theta) = 0. Following Remark 4.1, the real eigenvalue \eta satisfies one of two cases:

(i) If v = 0 and \alpha^{2}b+4bc = 4ad, then u\in\mathcal{N}(B^{T}) and the real eigenvalue

\begin{equation*} \eta = \frac{8\alpha a}{4\alpha a+2\alpha^{2}+8c}>0 \end{equation*}

meets the following inequality

\begin{equation*} \frac{8\alpha a_{\min}}{4\alpha a_{\min}+2\alpha^{2}+8c_{\max}}<\eta<\frac{8\alpha a_{\max}}{4\alpha a_{\max}+2\alpha^{2}+8c_{\min}}. \end{equation*}

(ii) If v\neq 0 and \alpha^{2}b+4bc+\alpha\beta b\sigma^{2} = 4ad, then u\notin\mathcal{N}(B^{T}) and the real eigenvalue

\begin{equation*} \eta = \frac{8\alpha a}{4\alpha a+2\alpha^{2}+8c+2\alpha\beta\sigma^{2}}>0 \end{equation*}

is bounded by

\begin{equation*} \frac{8\alpha a_{\min}}{4\alpha a_{\min}+2\alpha^{2}+8c_{\max}+2\alpha\beta\sigma^{2}_{\max}}<\eta<\frac{8\alpha a_{\max}}{4\alpha a_{\max}+2\alpha^{2}+8c_{\min}+2\alpha\beta\sigma^{2}_{\min}}. \end{equation*}

In the following, based on the above descriptions, we further discuss the algebraic properties of the preconditioned matrix \mathcal{P}_{\alpha,0}^{-1}\mathcal{A}, where the preconditioner \mathcal{P}_{\alpha,0} is the reduced form of (2.1) with \beta = 0. For simplicity of description, we denote \mathcal{P}_{SFHSS} and \mathcal{N}_{SFHSS} with \alpha^{2}>4|c|_{\max} and \beta = 0 by \mathcal{P}_{\alpha,0} and \mathcal{N}_{\alpha,0}, respectively. For more details on the algebraic properties of the preconditioned matrix, we refer to [23,24,25].

Theorem 4.3. Let A be nonsymmetric positive definite and B have full column rank. Then the preconditioned saddle-point matrix \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} has an eigenvalue \eta = 2 with algebraic multiplicity at least m, and the remaining eigenvalues are

\begin{equation*} \eta_{j} = \frac{4\alpha[\lambda_{j}(H)+\lambda_{j}(S)]}{[\alpha+2\lambda_{j}(H)][\alpha+2\lambda_{j}(S)]}\quad (j = 1,2,\ldots,n), \end{equation*}

where \lambda_{j}(H) and \lambda_{j}(S) denote the jth eigenvalues of H and S, respectively.

Proof. If \beta = 0, according to the second equation of (3.3), we can obtain

\begin{equation*} (\lambda+1)B^{T}u = 0, \end{equation*}

so either \lambda+1 = 0 or B^{T}u = 0. If \lambda = -1, i.e., \eta = 1-\lambda = 2, by the first equation of (3.3) we have

\begin{equation*} (\alpha^{2}I_{n}+4HS)u = 0. \end{equation*}

Since \alpha^{2}I_{n}+4HS is nonsingular when \alpha^{2}>4|c|_{\max}, it is easy to see that the related eigenvectors have the form [0^{T},v_{l}^{T}]^{T} (l = 1,2,\ldots,m). If B^{T}u = 0, using the first equation of (3.3) again, we obtain

\begin{equation*} \lambda = \frac{u^{*}(\alpha I_{n}-2H)(\alpha I_{n}-2S)u}{u^{*}(\alpha I_{n}+2H)(\alpha I_{n}+2S)u} = \frac{[\alpha-2\lambda(H)][\alpha-2\lambda(S)]}{[\alpha+2\lambda(H)][\alpha+2\lambda(S)]}, \end{equation*}

and therefore

\begin{equation*} \eta = 1-\lambda = \frac{4\alpha[\lambda(H)+\lambda(S)]}{[\alpha+2\lambda(H)][\alpha+2\lambda(S)]}. \end{equation*}

Hence, we complete the proof of Theorem 4.3.
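The multiplicity claim of Theorem 4.3 is straightforward to check numerically for a small random instance (illustrative data only; here α is simply taken as a moderate fixed value):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 2
A = np.eye(n) * n + rng.standard_normal((n, n))
H, S = 0.5 * (A + A.T), 0.5 * (A - A.T)
B = rng.standard_normal((n, m))
alpha = 4.0
In = np.eye(n)

# P_{alpha,0}: the SFHSS preconditioner (2.1) with beta = 0.
P = 0.25 * np.block([[(alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha, 2 * B],
                     [-2 * B.T, np.zeros((m, m))]])
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])
eta = np.linalg.eigvals(np.linalg.solve(P, calA))
# eta = 2 occurs with algebraic multiplicity at least m
# (eigenvectors of the form [0, v]: P [0; 2v] = [Bv; 0] = A [0; v]).
assert np.sum(np.isclose(eta, 2)) >= m
```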

Remark 4.4. Following Theorem 4.3, it is easy to see that the preconditioned matrix \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} has m+j (1\le j\le n) linearly independent eigenvectors, where

(i) the m linearly independent eigenvectors related to the eigenvalue 2 have the form [0^{T},v_{l}^{T}]^{T} (l = 1,2,\ldots,m);

(ii) the j (1\le j\le n) linearly independent eigenvectors associated with eigenvalues unequal to 2 have the form [u_{s}^{T},v_{s}^{T}]^{T} (s = 1,2,\ldots,j) with B^{T}u_{s} = 0.

In what follows, we study the properties of the minimal polynomial of the preconditioned matrix \mathcal{P}_{\alpha,0}^{-1}\mathcal{A}, which are beneficial to Krylov subspace acceleration. To derive an expression for the corresponding characteristic polynomial of \mathcal{P}_{\alpha,0}^{-1}\mathcal{A}, we decompose the preconditioner \mathcal{P}_{\alpha,0} once again as

\begin{equation*} \mathcal{P}_{\alpha,0} = \frac{1}{4}\left( \begin{array}{cc} I_{n} & 0 \\ -2B^{T}F^{-1} & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} F & 0 \\ 0 & 4B^{T}F^{-1}B \\ \end{array} \right)\left( \begin{array}{cc} I_{n} & 2F^{-1}B \\ 0 & I_{m} \\ \end{array} \right), \end{equation*}

where

\begin{equation*} F = \frac{1}{\alpha}(\alpha I_{n}+2H)(\alpha I_{n}+2S). \end{equation*}

It is obvious that

\begin{equation*} \mathcal{P}_{\alpha,0}^{-1} = 4\left( \begin{array}{cc} I_{n} & -2F^{-1}B \\ 0 & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} F^{-1} & 0 \\ 0 & (4B^{T}F^{-1}B)^{-1} \\ \end{array} \right)\left( \begin{array}{cc} I_{n} & 0 \\ 2B^{T}F^{-1} & I_{m} \\ \end{array} \right). \end{equation*}

A simple computation reveals that

\begin{equation*} \begin{split} \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} & = I_{n+m}-\mathcal{P}_{\alpha,0}^{-1}\mathcal{N}_{\alpha,0}\\ & = I_{n+m}-\left( \begin{array}{cc} I_{n} & -2F^{-1}B \\ 0 & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} F^{-1} & 0 \\ 0 & (4B^{T}F^{-1}B)^{-1} \\ \end{array} \right)\left( \begin{array}{cc} I_{n} & 0 \\ 2B^{T}F^{-1} & I_{m} \\ \end{array} \right)\left( \begin{array}{cc} G & -2B \\ 2B^{T} & 0 \\ \end{array} \right)\\ & = I_{n+m}-\left( \begin{array}{cc} F^{-1}G-2F^{-1}BD & 0 \\ D & -I_{m} \\ \end{array} \right) = \left( \begin{array}{cc} I_{n}-F^{-1}G+2F^{-1}BD & 0 \\ -D & 2I_{m} \\ \end{array} \right), \end{split} \end{equation*}

where

\begin{equation*} G = \frac{1}{\alpha}(\alpha I_{n}-2H)(\alpha I_{n}-2S) \end{equation*}

and

\begin{equation*} D = \frac{1}{2}(B^{T}F^{-1}B)^{-1}(B^{T}F^{-1}G+B^{T}). \end{equation*}

Since B^{T}u = 0 for the eigenvectors associated with the eigenvalues unequal to 2, for j = 1,2,\ldots,n we can get

\begin{equation*} \eta_{j}(I_{n}-F^{-1}G+2F^{-1}BD) = \eta_{j}(I_{n}-F^{-1}G) = \frac{4\alpha[\lambda_{j}(H)+\lambda_{j}(S)]}{[\alpha+2\lambda_{j}(H)][\alpha+2\lambda_{j}(S)]}. \end{equation*}

Then the characteristic polynomial of \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} can be described as

\begin{equation*} \Phi_{\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}}(\eta) = \det(\eta I_{n+m}-\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}) = (\eta-2)^{m}\prod\limits_{j = 1}^{n}(\eta-\eta_{j}). \end{equation*}

Denote

\begin{equation*} \Psi(\eta) = (\eta-2)\prod\limits_{j = 1}^{n}(\eta-\eta_{j}). \end{equation*}

It is straightforward to show that \Psi(\eta) is a polynomial in \eta of degree n+1. Then a simple computation reveals that

\begin{equation*} \Psi(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}) = (\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}-2I_{n+m})\prod\limits_{j = 1}^{n}(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}-\eta_{j}I_{n+m}) = \left( \begin{array}{cc} (\Theta-2I_{n})\prod\limits_{j = 1}^{n}(\Theta-\eta_{j}I_{n}) & 0 \\ -D\prod\limits_{j = 1}^{n}(\Theta-\eta_{j}I_{n}) & 0 \\ \end{array} \right), \end{equation*}

where \Theta = I_{n}-F^{-1}G+2F^{-1}BD.

Since \eta_{j} (j = 1,2,\ldots,n) are the eigenvalues of the matrix \Theta, following the spirit of the Cayley-Hamilton theorem, it is easy to see that \prod_{j = 1}^{n}(\Theta-\eta_{j}I_{n}) = 0, which leads to \Psi(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}) = 0.

The following conclusion is a direct consequence of the above statements, and therefore its proof is omitted.

Theorem 4.4. Under the assumptions of Theorem 4.3, if \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} has k (1\le k\le n) distinct eigenvalues \eta_{j} (1\le j\le k) unequal to 2, with respective algebraic multiplicities \gamma_{j} satisfying \sum_{j = 1}^{k}\gamma_{j} = n, then the degree of the minimal polynomial of the preconditioned matrix \mathcal{P}_{\alpha,0}^{-1}\mathcal{A} is at most k+1. Thus, the dimension of the Krylov subspace \mathcal{K}(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A},b) is at most k+1.

Next, we restrict our attention to the determination of the optimal parameters. It is easy to see that the performance of the \mathcal{P}_{SFHSS} preconditioner largely depends on the choices of the parameters \alpha and \beta. However, in our experience, selecting the optimal parameters is a difficult task, so in practical implementations we usually need an estimation method.

By taking steps similar to those in [26], we are ready to consider the choice of the parameters \alpha and \beta in the SFHSS iteration method. In order to obtain a fast convergence rate of the SFHSS iteration method (2.2) and a clustered eigenvalue distribution of the SFHSS-preconditioned matrix, it is important that 2\mathcal{P}_{SFHSS} approximates \mathcal{A} as closely as possible; hence, in view of (4.2), we may expect \mathcal{R}_{SFHSS}\approx 0 and compute the quasi-optimal iteration parameters \alpha_{exp} and \beta_{exp} accordingly.

We begin our analysis by minimizing the squared Frobenius norm of 4\mathcal{R}_{SFHSS}:

\begin{equation*} \begin{split} \Theta(\alpha,\beta)&\triangleq\|4\mathcal{R}_{SFHSS}\|_F^2\\ & = \left\|\left( \begin{array}{cc} \alpha I_n+\frac{4}{\alpha}HS & 0 \\ 0 & \beta B^TB \\ \end{array} \right)\right\|_F^2\\ & = n\alpha^2+4tr(HS-SH)-\frac{16}{\alpha^2}tr(HS^2H)+\beta^2tr[(B^TB)^2], \end{split} \end{equation*}

where tr(E) denotes the trace of the matrix E .

By taking partial derivatives of \Theta(\alpha, \beta) , we can obtain

    \frac{\partial\Theta(\alpha,\beta)}{\partial\alpha} = 2n\alpha+\frac{32}{\alpha^3}tr(HS^2H) \; \; \; \; \mbox{and}\; \; \; \; \frac{\partial\Theta(\alpha,\beta)}{\partial\beta} = 2\beta tr(B^TB)^2.

It is obvious that \Theta(\alpha, \beta) attains its minimum at

    \begin{equation*} \begin{split} \alpha_{exp} = 2\sqrt[4]{-\frac{tr(HS^2H)}{n}}. \end{split} \end{equation*}

For the proof of tr(HS^2H) < 0 , we refer to [26, Lemma 1]. In addition, since \frac{\partial\Theta(\alpha, \beta)}{\partial\beta} = 2\beta tr(B^TB)^2 > 0 for \beta > 0 , \Theta(\alpha, \beta) is increasing in \beta , so in practical implementations it is reasonable to try values of the iteration parameter \beta_{exp} as close to 0 as possible.
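The quasi-optimal \alpha_{exp} above is cheap to evaluate. A short sketch with a random illustrative A (the sign fact tr(HS^2H) = -\|SH\|_F^2 \le 0 for skew-symmetric S is checked inline):

```python
import numpy as np

def alpha_exp(A):
    """Quasi-optimal alpha_exp = 2 * (-tr(H S^2 H) / n)^(1/4)."""
    n = A.shape[0]
    H, S = 0.5 * (A + A.T), 0.5 * (A - A.T)
    t = np.trace(H @ S @ S @ H)
    # tr(H S^2 H) = -||S H||_F^2 <= 0, since S^T = -S.
    assert t <= 0
    return 2.0 * (-t / n) ** 0.25

rng = np.random.default_rng(7)
A = np.eye(6) * 6 + rng.standard_normal((6, 6))
assert alpha_exp(A) > 0
```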

In this section, we present some numerical experiments to test the feasibility and robustness of the SFHSS iteration method for solving the saddle-point problem (1.1) arising from Oseen models of incompressible flow. In order to evaluate the performance of the proposed SFHSS preconditioner against some existing matrix splitting preconditioners, we compare the numerical results of the SFHSS preconditioner \mathcal{P}_{SFHSS} (2.1) with the GSS preconditioner \mathcal{P}_{GSS} (1.3), the SS preconditioner \mathcal{P}_{SS} (1.4), the MSS preconditioner \mathcal{P}_{MSS} (1.5), the GMSS preconditioner \mathcal{P}_{GMSS} (1.6), the DPSS preconditioner \mathcal{P_{DPSS}} presented in [27,28] and the preconditioner \mathcal{P_{IDPSS}} established in [29], where the preconditioner \mathcal{P_{DPSS}} is defined by

    \begin{equation} \begin{split} \mathcal{P_{DPSS}} = \frac{1}{2\alpha}\left( \begin{array}{cc} \alpha I_n+A & \\ & \alpha I_m \\ \end{array} \right)\left( \begin{array}{cc} \alpha I_n & B \\ -B^T& \alpha I_m \\ \end{array} \right). \end{split} \end{equation} (5.1)

    Here, the optimal iteration parameter is given by \alpha_{exp} = \frac{||A||_F+2||B||_F}{2(n+m)} ([30]). In addition, the preconditioner \mathcal{P_{IDPSS}} is given by

    \begin{equation} \begin{split} \mathcal{P_{IDPSS}} = \frac{1}{2\alpha}\left( \begin{array}{cc} \alpha I_n+A & \\ & 2\alpha I_m \\ \end{array} \right)\left( \begin{array}{cc} \alpha I_n & B \\ -B^T& \\ \end{array} \right), \end{split} \end{equation} (5.2)

    and the optimal iteration parameter is given by \alpha_{exp} = \frac{||A||_F+||B||_F}{2\sqrt{n}} ([24]).
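    The two parameter formulas and the factored structure of \mathcal{P_{DPSS}} in (5.1) can be sketched as below; the dimensions and random test matrices are illustrative assumptions, not the Oseen data used in the experiments.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 10, 6
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))

    normA = np.linalg.norm(A, 'fro')
    normB = np.linalg.norm(B, 'fro')

    alpha_dpss  = (normA + 2 * normB) / (2 * (n + m))   # DPSS parameter, per [30]
    alpha_idpss = (normA + normB) / (2 * np.sqrt(n))    # IDPSS parameter, per [24]

    # Assemble P_DPSS as the product of the two factors in (5.1)
    P1 = np.block([[alpha_dpss * np.eye(n) + A, np.zeros((n, m))],
                   [np.zeros((m, n)),           alpha_dpss * np.eye(m)]])
    P2 = np.block([[alpha_dpss * np.eye(n), B],
                   [-B.T,                   alpha_dpss * np.eye(m)]])
    P_dpss = (P1 @ P2) / (2 * alpha_dpss)
    assert P_dpss.shape == (n + m, n + m)
    ```
    
    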

    We use the above preconditioners to accelerate the GMRES iteration method and compare the resulting preconditioned GMRES methods in terms of both the number of iteration steps (denoted by IT) and the elapsed CPU time in seconds (denoted by CPU). In our implementations, we choose the zero vector x^{(0)} = 0 as the initial guess, take the right-hand-side vector b such that the exact solutions u and v are the vectors with all entries equal to one, and set the stopping criterion to be the relative residual norm

    \mbox{RES} = \frac{\|b-Ax^{(k)}\|_2}{\|b-Ax^{(0)}\|_2} < 10^{-6}

    or the prescribed maximum iteration number k_{max} = n , where x^{(k)} is the approximate solution at the k th iteration. In actual applications, the iteration parameters \alpha and \beta for the preconditioner \mathcal{P}_{SFHSS} are chosen as the experimentally optimal values, which lead to the smallest number of iterations of the preconditioned GMRES method for each choice of the spatial mesh size [18].
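    The stopping test above can be sketched in isolation. As a minimal stand-in for the preconditioned GMRES solver used in the experiments, the loop below uses a plain Richardson iteration on a small random SPD system; the matrix, step size, and dimensions are illustrative assumptions, and only the relative-residual criterion matches the text.

    ```python
    import numpy as np

    def rel_res(A, b, x, r0_norm):
        """RES = ||b - A x^{(k)}||_2 / ||b - A x^{(0)}||_2."""
        return np.linalg.norm(b - A @ x) / r0_norm

    rng = np.random.default_rng(3)
    n = 20
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)         # well-conditioned SPD test matrix
    b = A @ np.ones(n)                  # exact solution = vector of all ones, as in the text

    x = np.zeros(n)                     # initial guess x^{(0)} = 0
    r0 = np.linalg.norm(b - A @ x)
    omega = 1.0 / np.linalg.norm(A, 2)  # safe Richardson step size (1 / spectral norm)
    for k in range(10000):
        if rel_res(A, b, x, r0) < 1e-6:
            break                       # stopping criterion RES < 10^{-6}
        x = x + omega * (b - A @ x)

    assert rel_res(A, b, x, r0) < 1e-6
    assert np.allclose(x, np.ones(n), atol=1e-4)
    ```
    
    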

    Example 5.1. [31]. Consider the Oseen equation of the form

    \begin{equation} \left\{\begin{array}{cc} &-\nu\bigtriangleup u+(v\cdot\nabla)u+\nabla p = f,\\ &\bigtriangledown\cdot u = 0,\\ \end{array}\right. \mbox{in}\; \; \Omega, \end{equation} (5.3)

    where \Omega is a bounded domain with suitable boundary conditions, the parameter \nu > 0 denotes the viscosity, u represents the velocity field, v is the approximation of u from the previous Picard iteration, and p denotes the pressure. The test problem is the classical two-dimensional leaky-lid driven cavity problem. We employ the "IFISS" software package [32] to discretize the Oseen problem (5.3) with the Q_2-Q_1 mixed finite element method on uniform grids. The generated saddle point system of type (1.1) has a nonsymmetric positive definite (1,1)-block A which corresponds to a discretization of the convection-diffusion operator L[u]: = -\nu\bigtriangleup u+(v\cdot\nabla)u . In the actual implementation, four values of the viscosity parameter are used, namely \nu = 1, 0.1, 0.01, 0.001 , and four increasingly fine grids are selected, i.e., 16\times16 , 32\times32 , 64\times64 and 128\times128 .

    In Tables 1–7, we apply the GMRES iteration method in conjunction with the corresponding preconditioners and report IT and CPU with respect to different sizes of the discretization grids for different values of \alpha , \beta and \nu . As these tables show, the generalized shift-HSS method behaves much better than the GMSS, MSS, GSS, SS, DPSS and IDPSS iteration methods; in particular, as the problem size increases, the convergence rate of the GMRES iteration method with the generalized shift-HSS preconditioner is much faster than that of the GMRES iteration method with the GMSS, MSS, GSS, SS, DPSS and IDPSS preconditioners. Therefore, the generalized shift-HSS preconditioner is more efficient and stable in accelerating the convergence of the GMRES iteration method.

    Table 1.  IT and CPU with \nu = 1 , \alpha = 0.01 and \beta = 0.005 .
    Grid 64\times64 128\times128 256\times256 512\times512
    \mathcal{P}_{SFHSS} CPU 2.2468 19.5751 159.8347 978.9446
    IT 4 4 4 4
    \mathcal{P}_{SS} CPU 7.4697 127.3370 1.6111e+03 2.0024e+04
    IT 47 169 487 998
    \mathcal{P}_{GSS} CPU 6.1840 88.7871 1.2920e+03 8.1583e+04
    IT 36 116 392 4550
    \mathcal{P}_{MSS} CPU 6.2798 78.7841 660.9457 1.4831e+04
    IT 37 104 200 817
    \mathcal{P}_{GMSS} CPU 7.0076 86.0697 807.2316 1.1505e+04
    IT 42 113 244 631
    \mathcal{P}_{DPSS} CPU 81.6477 993.7621 7.5473e+03 1.0011e+05
    IT 260 573 627 1121
    \mathcal{P}_{IDPSS} CPU 50.0780 369.2814 2.9020e+03 1.8826e+04
    IT 164 216 243 200

    Table 2.  IT and CPU with \nu = 0.1 , \alpha = 0.01 and \beta = 0.005 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.0560 0.3080 2.4352 20.0278
    IT 3 3 4 4
    \mathcal{P}_{SS} CPU 0.0755 0.6873 7.9002 128.0045
    IT 11 20 47 169
    \mathcal{P}_{GSS} CPU 0.0590 0.4890 6.1251 89.8709
    IT 7 15 36 116
    \mathcal{P}_{MSS} CPU 0.0807 0.6550 6.6287 80.3747
    IT 13 17 37 104
    \mathcal{P}_{GMSS} CPU 0.0841 0.8567 7.4521 86.2577
    IT 14 24 42 113
    \mathcal{P}_{DPSS} CPU 0.9915 8.5094 83.2322 998.5375
    IT 87 180 255 573
    \mathcal{P}_{IDPSS} CPU 0.8671 8.6078 51.8677 373.8926
    IT 77 140 164 216

    Table 3.  IT and CPU with \nu = 0.01 , \alpha = 0.01 and \beta = 0.005 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.0690 0.3106 2.5461 19.4539
    IT 3 3 4 4
    \mathcal{P}_{SS} CPU 0.1031 0.7981 7.7896 130.0777
    IT 11 21 47 169
    \mathcal{P}_{GSS} CPU 0.0499 0.5213 5.9485 90.4792
    IT 8 15 36 116
    \mathcal{P}_{MSS} CPU 0.0950 0.5529 5.9219 83.7678
    IT 13 17 37 104
    \mathcal{P}_{GMSS} CPU 0.0903 0.7730 6.6637 89.1844
    IT 14 24 42 113
    \mathcal{P}_{DPSS} CPU 1.0676 9.0995 79.1151 1.0232e+03
    IT 87 150 260 573
    \mathcal{P}_{IDPSS} CPU 0.9469 8.0854 49.9778 372.0878
    IT 77 140 164 216

    Table 4.  IT and CPU with \nu = 1 , \alpha = \alpha_{exp} and \beta = 0.05 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.1059 0.4267 2.5201 20.0278
    IT 4 4 4 4
    \mathcal{P}_{SS} CPU 0.6867 11.1614 52.6255 913.3530
    IT 52 152 163 515
    \mathcal{P}_{GSS} CPU 0.2370 1.1227 4.5158 21.7143
    IT 18 15 13 13

    Table 5.  IT and CPU with \nu = 0.001 , \alpha = 0.01 and \beta = 0.005 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.0794 0.3281 2.4645 19.4732
    IT 3 3 4 4
    \mathcal{P}_{SS} CPU 0.0876 0.7553 7.6986 128.4442
    IT 11 21 47 169
    \mathcal{P}_{GSS} CPU 0.0620 0.5892 6.1956 89.6522
    IT 8 15 36 116
    \mathcal{P}_{MSS} CPU 0.0923 0.5876 5.9810 79.4758
    IT 15 17 37 104
    \mathcal{P}_{GMSS} CPU 0.0989 0.8472 7.0856 87.2737
    IT 14 24 42 113
    \mathcal{P}_{DPSS} CPU 1.0748 9.1363 81.7333 1.0026e+03
    IT 87 150 260 573
    \mathcal{P}_{IDPSS} CPU 0.9799 8.5531 51.4426 367.3563
    IT 77 140 164 216

    Table 6.  IT and CPU with \nu = 0.1 , \alpha = \alpha_{exp} and \beta = 0.05 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.0975 0.4061 2.4989 19.6638
    IT 4 4 4 4
    \mathcal{P}_{SS} CPU 0.6554 10.7887 52.3130 940.0833
    IT 52 152 163 510
    \mathcal{P}_{GSS} CPU 0.2531 1.1453 4.1881 22.5455
    IT 18 15 13 13

    Table 7.  IT and CPU with \nu = 0.01 , \alpha = \alpha_{exp} and \beta = 0.001 .
    Grid 16\times16 32\times32 64\times64 128\times128
    \mathcal{P}_{SFHSS} CPU 0.0595 0.2582 1.5179 12.3510
    IT 2 2 2 4
    \mathcal{P}_{SS} CPU 0.6741 11.4814 53.4351 939.0087
    IT 52 152 163 525
    \mathcal{P}_{GSS} CPU 0.2454 1.1735 4.6387 22.7694
    IT 18 15 13 13


    In Figures 1–4, we present the convergence histories of the corresponding iteration methods to compare the effects of the corresponding preconditioners with respect to the iteration parameters \alpha and \beta . It is easily seen that the generalized shift-HSS iteration method has smoother convergence curves than the GMSS, MSS, GSS and SS iteration methods.

    Figure 1.  Convergence curves of GMRES iteration methods.
    Figure 2.  Convergence curves of GMRES iteration methods.
    Figure 3.  Convergence curves of GMRES iteration methods.
    Figure 4.  Convergence curves of GMRES iteration methods.

    Example 5.2. [33,34]. Consider the two-dimensional convection-diffusion equation

    \begin{equation} \begin{array}{cc} &-\nabla^2 u+q\bigtriangledown u = f(x,y)\; \; \; \; \mbox{in}\; \; \Omega = [0,1]\times[0,1], \end{array} \end{equation} (5.4)

    with Dirichlet boundary conditions and constant coefficient q . Similar to the three-dimensional case considered in [35], the five-point centered finite difference discretization is applied to the above equation, and the nonsymmetric saddle point system (1.1) is obtained, where

    \begin{equation*} \begin{split}A = \left( \begin{array}{cc} I_l\otimes T_r+T_r\otimes I_l & 0 \\ 0 & I_l\otimes T_r+T_r\otimes I_l \\ \end{array} \right)\in R^{2l^2\times 2l^2}, \end{split} \end{equation*}
    \begin{equation*} \begin{split}B = \left( \begin{array}{c} I_l\otimes F \\ F\otimes I_l \\ \end{array} \right)\in R^{2l^2\times l^2}, \end{split} \end{equation*}

    and

    \begin{equation*} \begin{split}T_r = \frac{1}{h^2}tridiag(-1-r,2,-1+r)\in R^{l\times l},\; \; \; \; \; \; \; F = \frac{1}{h}tridiag(-1,1,0)\in R^{l\times l}. \end{split} \end{equation*}

    Here, h = \frac{1}{l+1} represents an equidistant step-size in each coordinate direction, \otimes denotes the Kronecker product and r = qh/2 indicates the mesh Reynolds number.
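    The matrices of Example 5.2 can be assembled directly from the Kronecker-product formulas above. The sketch below is a minimal illustration: the grid parameter l and coefficient q are chosen small for readability, and the tridiag(a, b, c) convention is assumed to place a on the subdiagonal and c on the superdiagonal.

    ```python
    import numpy as np

    def tridiag(a, b, c, l):
        """l-by-l tridiagonal matrix: subdiagonal a, diagonal b, superdiagonal c (assumed convention)."""
        return a * np.eye(l, k=-1) + b * np.eye(l) + c * np.eye(l, k=1)

    l = 4                      # illustrative grid parameter
    q = 1.0                    # convection coefficient
    h = 1.0 / (l + 1)          # equidistant step size
    r = q * h / 2.0            # mesh Reynolds number

    T = tridiag(-1 - r, 2.0, -1 + r, l) / h**2
    F = tridiag(-1.0, 1.0, 0.0, l) / h
    I = np.eye(l)

    L2 = np.kron(I, T) + np.kron(T, I)                  # 2-D discrete operator, l^2 x l^2
    A = np.block([[L2, np.zeros((l*l, l*l))],
                  [np.zeros((l*l, l*l)), L2]])          # (1,1)-block, 2l^2 x 2l^2
    B = np.vstack([np.kron(I, F), np.kron(F, I)])       # 2l^2 x l^2

    assert A.shape == (2 * l * l, 2 * l * l)
    assert B.shape == (2 * l * l, l * l)
    ```
    
    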

    In Tables 8–11, we report IT and CPU for the SFHSS, DPSS and IDPSS preconditioners used to accelerate the GMRES iteration method, for different sizes of the discretization grids and different values of q , with \alpha = \alpha_{exp} and \beta = 0.00001 . As these tables show, the SFHSS method outperforms the DPSS and IDPSS methods; in particular, as the problem size increases, the convergence rate of the GMRES iteration method with the generalized shift-HSS preconditioner is much faster than that of the GMRES iteration method with the DPSS and IDPSS preconditioners. Therefore, the generalized SFHSS preconditioner is more efficient and stable.

    Table 8.  IT and CPU with q = 0.01 , \alpha = \alpha_{exp} and \beta = 0.00001 .
    l 16 32 64 128
    \mathcal{P}_{SFHSS} CPU 0.0430 0.1943 1.1330 6.9972
    IT 3 4 4 5
    \mathcal{P}_{SS} CPU 0.5416 6.8962 142.8853 743.6954
    IT 68 127 650 680
    \mathcal{P}_{GSS} CPU 0.0949 0.4838 2.1202 10.0982
    IT 10 10 9 8

    Table 9.  IT and CPU with q = 0.1 , \alpha = \alpha_{exp} and \beta = 0.00001 .
    l 16 32 64 128
    \mathcal{P}_{SFHSS} CPU 0.0405 0.1926 1.1506 6.8099
    IT 4 4 4 5
    \mathcal{P}_{SS} CPU 0.5003 5.7164 62.4237 807.8969
    IT 66 152 341 670
    \mathcal{P}_{GSS} CPU 0.0804 0.3724 1.8443 8.7051
    IT 10 10 9 8

    Table 10.  IT and CPU with q = 1 , \alpha = \alpha_{exp} and \beta = 0.00001 .
    l 16 32 64 128
    \mathcal{P}_{SFHSS} CPU 0.0405 0.2352 1.3521 7.8656
    IT 4 4 4 5
    \mathcal{P}_{SS} CPU 0.7747 12.3715 151.9526 988.8506
    IT 86 258 694 840
    \mathcal{P}_{GSS} CPU 0.0899 0.5204 2.2006 9.8894
    IT 10 10 9 8

    Table 11.  IT and CPU with q = 10 , \alpha = \alpha_{exp} and \beta = 0.00001 .
    l 16 32 64 128
    \mathcal{P}_{SFHSS} CPU 0.0650 0.2451 1.3483 7.9823
    IT 4 4 4 5
    \mathcal{P}_{SS} CPU 0.6067 11.2988 165.5783 2.1922e+03
    IT 74 273 761 1844
    \mathcal{P}_{GSS} CPU 0.0964 0.4138 2.1823 17.0936
    IT 12 10 9 8


    The novelty of the present paper is the construction and analysis of the generalized shift-HSS iteration method for nonsingular saddle point systems with a nonsymmetric positive definite (1, 1)-block. We investigate the convergence properties of the SFHSS iteration method and further illustrate the robustness and efficiency of the generalized shift-HSS preconditioner by numerical examples. Future work should focus on developing modified forms of the GSS iteration method and the generalized shift GSOR-like method for complex symmetric linear systems, and on studying the effects of the iteration parameters on the eigenvalue clustering of the corresponding preconditioned matrices.

    This research is supported by Scientific and Technological Research Program of Chongqing Research Program of Basic Research and Frontier Technology (cstc2018jcyjAX0685), Xiangsi Lake Young Scholars Innovation Team of Guangxi Minzu University (No.2021RSCXSHQN05) and Natural Science Foundation of Guangxi Minzu University (2021KJQD01).

    The authors declare that they have no conflict of interest.



  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
