Research article

Study on sensible beginning divided-search enhanced Karnik-Mendel algorithms for centroid type-reduction of general type-2 fuzzy logic systems

  • General type-2 fuzzy logic systems (GT2 FLSs) on the basis of alpha-plane representation of GT2 fuzzy sets (FSs) have attracted considerable attention in recent years. For the kernel type-reduction (TR) block of GT2 FLSs, the enhanced Karnik-Mendel (EKM) algorithm is the most popular approach. This paper proposes the sensible beginning divided-search EKM (SBDEKM) algorithms for completing the centroid TR of GT2 FLSs. Computer simulations are provided to show the performances of the SBDEKM algorithms. Compared with EKM algorithms and sensible beginning EKM (SBEKM) algorithms, the SBDEKM algorithms have almost the same accuracies and better computational efficiency.

    Citation: Yang Chen, Chenxi Li. Study on sensible beginning divided-search enhanced Karnik-Mendel algorithms for centroid type-reduction of general type-2 fuzzy logic systems[J]. AIMS Mathematics, 2024, 9(5): 11851-11876. doi: 10.3934/math.2024580




Consider the large, sparse saddle-point problem $\mathcal{A}x=b$ of the form

$$\mathcal{A}x=\begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\begin{pmatrix} f \\ g \end{pmatrix}, \qquad (1.1)$$

where $A\in\mathbb{R}^{n\times n}$ is nonsymmetric positive definite, that is, its symmetric part $H=\frac12(A+A^T)$ is positive definite, $B\in\mathbb{R}^{n\times m}$ ($n\ge m$) is a full column rank matrix, $f\in\mathbb{R}^n$ and $g\in\mathbb{R}^m$ are given vectors, and $B^T$ denotes the transpose of $B$. Under these assumptions, $\mathcal{A}$ is nonsingular, which ensures the existence and uniqueness of the solution of the saddle-point problem (1.1). When the matrices $A$ and $B$ are large and sparse, linear systems of the form (1.1) arise in many problems of scientific computing and engineering, including computational fluid dynamics, optimization, constrained and generalized least-squares problems, incompressible flow problems, and mixed finite element approximations of elliptic PDEs. See [1] and the references therein for more examples and additional information. We also refer to [2] for periodic Sylvester matrix equations and generalized coupled Sylvester matrix equations.
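As a concrete illustration of this block structure, the following NumPy sketch (not from the paper; `H`, `S`, and `B` are randomly generated stand-ins for an actual discretization) assembles a small instance of (1.1) and checks that the stated assumptions make the coefficient matrix nonsingular:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# Build A = H + S with H symmetric positive definite and S skew-symmetric,
# so that A is nonsymmetric positive definite as assumed in (1.1).
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
K = rng.standard_normal((n, n))
S = K - K.T
A = H + S
B = rng.standard_normal((n, m))   # full column rank (generically)

# Assemble the block coefficient matrix of (1.1).
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])

assert np.linalg.matrix_rank(B) == m     # B has full column rank
assert abs(np.linalg.det(calA)) > 0      # calA is nonsingular
```

Under these assumptions the determinant check always succeeds; the positive definiteness of the symmetric part together with the full column rank of $B$ rules out a nontrivial null space.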

In general, the coefficient matrix $\mathcal{A}$ of (1.1) is large, sparse, and extremely ill-conditioned, so iteration methods are more attractive than direct methods in terms of storage requirements and computing time. As is well known, a clustered spectrum of the preconditioned matrix often results in rapid convergence of Krylov subspace methods. Therefore, in light of the special structure of the saddle-point problem (1.1), and in order to achieve a rapid convergence rate and improve computational efficiency, many effective and practical preconditioning techniques have been investigated in the literature, such as preconditioned Krylov subspace methods [3], SOR-type methods [4,5], Uzawa-type methods [6,7], HSS-type methods and their accelerated variants [8,9,10], and shift-splitting (denoted by SS) and generalized shift-splitting (denoted by GSS) iteration methods [11,12,13].

In the past few years, many SS- and GSS-type iteration methods have been proposed that greatly improve the convergence rate through appropriate choices of the iteration parameters. Based on shifting and clustering the spectrum of the coefficient matrix, Bai and Zhang [12] presented a regularized conjugate gradient method for symmetric positive definite systems. In [13], Bai et al. investigated a shift-splitting iteration method for solving large, sparse non-Hermitian positive definite systems. A modification of the SS preconditioner (i.e., the GSS preconditioner) was established in [14,15] for a symmetric positive definite (1,1)-block sub-matrix $A$ and positive iteration parameters $\alpha$ and $\beta$. The GSS iteration method is defined as

$$\frac12\begin{pmatrix} \alpha I_n+A & B \\ -B^T & \beta I_m \end{pmatrix}x^{(k+1)}=\frac12\begin{pmatrix} \alpha I_n-A & -B \\ B^T & \beta I_m \end{pmatrix}x^{(k)}+\begin{pmatrix} f \\ g \end{pmatrix}, \qquad (1.2)$$

    where the related GSS preconditioner is given by

$$\mathcal{P}_{GSS}=\frac12\begin{pmatrix} \alpha I_n+A & B \\ -B^T & \beta I_m \end{pmatrix}, \qquad (1.3)$$

with $I_q$ denoting the identity matrix of size $q$. In particular, when $\alpha=\beta$, the GSS iteration method (1.2) reduces to the SS iteration method [16], whose SS preconditioner is given by

$$\mathcal{P}_{SS}=\frac12\begin{pmatrix} \alpha I_n+A & B \\ -B^T & \alpha I_m \end{pmatrix}. \qquad (1.4)$$

For the saddle-point problem (1.1), Cao et al. [17] applied the GSS iteration method (1.2) and proposed a deteriorated shift-splitting (DSS) preconditioner by removing the shift parameter $\alpha$ from the (1,1)-block of the GSS preconditioner. Based on the symmetric and skew-symmetric splitting $A=H+S$, with $S=\frac12(A-A^T)$, of the (1,1)-block sub-matrix $A$, Zhou et al. [18] proposed a modified shift-splitting (denoted by MSS) preconditioner of the form

$$\mathcal{P}_{MSS}=\frac12\begin{pmatrix} \alpha I_n+2H & B \\ -B^T & \alpha I_m \end{pmatrix}, \qquad (1.5)$$

and derived a sufficient condition on the iteration parameter $\alpha$ for the convergence of the MSS iteration method. In [19], Huang et al. proposed a generalized modified shift-splitting (denoted by GMSS) iteration method by introducing two iteration parameters $\alpha$ and $\beta$ instead of the single parameter $\alpha$, which further improves the convergence rate of the MSS iteration method; the corresponding GMSS preconditioner is given by

$$\mathcal{P}_{GMSS}=\frac12\begin{pmatrix} \alpha I_n+2H & B \\ -B^T & \beta I_m \end{pmatrix}. \qquad (1.6)$$

The aim of this paper is to propose an improvement of the SS- and GSS-type iteration methods and to investigate the SFHSS preconditioner for Krylov subspace methods, such as the GMRES iteration method [20], which significantly accelerates the convergence of the related iterative method. According to our theoretical analysis, when the iteration parameters $\alpha$ and $\beta$ satisfy certain conditions, the SFHSS iteration method converges to the unique solution of the saddle-point system (1.1). The rest of this paper is organized as follows. In Section 2, we present the SFHSS iteration method and its implementation. In Section 3, we analyze the convergence of the SFHSS iteration method for nonsingular saddle-point problems with a nonsymmetric positive definite (1,1)-block. In Section 4, we discuss the algebraic properties of the resulting SFHSS preconditioned matrix. Finally, numerical experiments arising from a finite element discretization of the Oseen problem are presented in Section 5 to show the correctness of the theoretical analysis and the feasibility of the SFHSS iteration method.

The main purpose of this section is to introduce the new GSS-type iteration method (i.e., the SFHSS iteration method) for the nonsingular saddle-point problem (1.1). Based on the local Hermitian and skew-Hermitian splitting of the non-Hermitian positive definite (1,1)-block sub-matrix, Yang and Wu [6] presented the Uzawa-HSS iteration method. Let $A=H+S$ be the symmetric and skew-symmetric splitting of the (1,1)-block of the coefficient matrix $\mathcal{A}$ defined in (1.1), with $S=\frac12(A-A^T)$. Following the idea of the Uzawa-HSS iteration method, we make the following matrix splitting for the coefficient matrix $\mathcal{A}$ of the linear system (1.1):

$$\mathcal{A}=\mathcal{P}_{SFHSS}-\mathcal{N}_{SFHSS},$$

where the shift-HSS preconditioner $\mathcal{P}_{SFHSS}$ is given by

$$\mathcal{P}_{SFHSS}=\frac14\begin{pmatrix} \frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S) & 2B \\ -2B^T & \beta B^TB \end{pmatrix} \qquad (2.1)$$

and the matrix $\mathcal{N}_{SFHSS}$ is defined as

$$\mathcal{N}_{SFHSS}=\frac14\begin{pmatrix} \frac1\alpha(\alpha I_n-2H)(\alpha I_n-2S) & -2B \\ 2B^T & \beta B^TB \end{pmatrix}.$$

On the basis of the above splitting, we derive the following SFHSS iteration method

$$\mathcal{P}_{SFHSS}x^{(k+1)}=\mathcal{N}_{SFHSS}x^{(k)}+b, \qquad (2.2)$$

and the generalized shift-HSS iteration matrix $\tau(\alpha,\beta)$ can be constructed as

$$\tau(\alpha,\beta)=\mathcal{P}_{SFHSS}^{-1}\mathcal{N}_{SFHSS}. \qquad (2.3)$$
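The splitting above is easy to verify numerically. The following sketch (with random stand-in matrices and sample parameter values, not the paper's code) builds $\mathcal{P}_{SFHSS}$ and $\mathcal{N}_{SFHSS}$, confirms the identity $\mathcal{A}=\mathcal{P}_{SFHSS}-\mathcal{N}_{SFHSS}$, and forms the iteration matrix $\tau(\alpha,\beta)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)            # symmetric positive definite part
K = rng.standard_normal((n, n))
S = K - K.T                            # skew-symmetric part
A = H + S
B = rng.standard_normal((n, m))
calA = np.block([[A, B], [-B.T, np.zeros((m, m))]])

alpha, beta = 2.0, 1.0                 # illustrative parameter choices
In = np.eye(n)

Theta1 = (alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha
Theta2 = (alpha * In - 2 * H) @ (alpha * In - 2 * S) / alpha

P = 0.25 * np.block([[Theta1, 2 * B], [-2 * B.T, beta * B.T @ B]])
N = 0.25 * np.block([[Theta2, -2 * B], [2 * B.T, beta * B.T @ B]])

# The matrix splitting calA = P - N of (2.2) holds exactly.
assert np.allclose(P - N, calA)

# Iteration matrix tau(alpha, beta) = P^{-1} N of (2.3);
# rho < 1 is exactly the convergence condition studied in Section 3.
tau = np.linalg.solve(P, N)
rho = max(abs(np.linalg.eigvals(tau)))
```

The cancellation in the (1,1)-block, $\frac14(\Theta_1-\Theta_2)=H+S=A$, is what makes the splitting exact for every $\alpha>0$.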

Denote

$$\Theta_1=\frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S),\qquad \Theta_2=\frac1\alpha(\alpha I_n-2H)(\alpha I_n-2S),$$
$$\Phi_1=\beta B^TB+4B^T\Theta_1^{-1}B,\qquad \Phi_2=\Theta_1+4B(\beta B^TB)^{-1}B^T.$$

More precisely, multiplying both sides of (2.2) from the left by the matrices

$$\begin{pmatrix} I_n & -2B(\beta B^TB)^{-1} \\ 0 & I_m \end{pmatrix}\quad\text{and}\quad\begin{pmatrix} I_n & 0 \\ 2B^T\Theta_1^{-1} & I_m \end{pmatrix},$$

respectively, we can easily obtain explicit formulas for $u^{(k+1)}$ and $v^{(k+1)}$. Therefore, the iteration scheme (2.2) naturally leads to the following algorithmic description of the SFHSS iteration method:

Method 2.1. (The SFHSS method) Given an initial guess $x^{(0)}=[u^{(0)T},v^{(0)T}]^T$, for $k=0,1,2,\dots$, until the iteration sequence $[u^{(k)},v^{(k)}]$ converges:

(i) Compute $u^{(k+1)}$ from

$$u^{(k+1)}=u^{(k)}-4\Phi_2^{-1}\left[A+2B(\beta B^TB)^{-1}B^T\right]u^{(k)}-4\Phi_2^{-1}Bv^{(k)}+4\Phi_2^{-1}\left[f-2B(\beta B^TB)^{-1}g\right];$$

(ii) Compute $v^{(k+1)}$ from

$$v^{(k+1)}=v^{(k)}+2\Phi_1^{-1}B^T(I_n+\Theta_1^{-1}\Theta_2)u^{(k)}-8\Phi_1^{-1}B^T\Theta_1^{-1}Bv^{(k)}+4\Phi_1^{-1}\left(g+2B^T\Theta_1^{-1}f\right).$$

Following the iteration scheme (2.2) again, it is not difficult to see that the SFHSS iteration method can be written in the fixed-point form

$$x^{(k+1)}=\tau(\alpha,\beta)x^{(k)}+\mathcal{P}_{SFHSS}^{-1}b. \qquad (2.4)$$

It is easy to see that at each step of the SFHSS iteration method (2.4), or of a preconditioned Krylov subspace method such as GMRES, we need to solve a linear system $\mathcal{P}_{SFHSS}z=r$, i.e.,

$$\frac14\begin{pmatrix} \frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S) & 2B \\ -2B^T & \beta B^TB \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}=\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}, \qquad (2.5)$$

where $z_1,r_1\in\mathbb{R}^n$ and $z_2,r_2\in\mathbb{R}^m$. A direct calculation confirms that

$$\mathcal{P}_{SFHSS}=\frac14\begin{pmatrix} I_n & 2B(\beta B^TB)^{-1} \\ 0 & I_m \end{pmatrix}\begin{pmatrix} \Phi_2 & 0 \\ 0 & \beta B^TB \end{pmatrix}\begin{pmatrix} I_n & 0 \\ -2(\beta B^TB)^{-1}B^T & I_m \end{pmatrix}. \qquad (2.6)$$

From the decomposition of $\mathcal{P}_{SFHSS}$ in (2.6), it is obvious that

$$\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}=4\begin{pmatrix} I_n & 0 \\ 2(\beta B^TB)^{-1}B^T & I_m \end{pmatrix}\begin{pmatrix} \Phi_2^{-1} & 0 \\ 0 & (\beta B^TB)^{-1} \end{pmatrix}\begin{pmatrix} I_n & -2B(\beta B^TB)^{-1} \\ 0 & I_m \end{pmatrix}\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}. \qquad (2.7)$$

    Then we have the following algorithmic implementation of the SFHSS iteration method.

Algorithm 2.1. For a given vector $(r_1^T,r_2^T)^T$, the vector $(z_1^T,z_2^T)^T$ can be computed from (2.7) by the following steps:

(1) solve $B^TBt=\frac1\beta r_2$;

(2) solve $\Phi_2z_1=4(r_1-2Bt)$;

(3) solve $B^TBp=\frac2\beta B^Tz_1$;

(4) compute $z_2=p+4t$.

Since $B^TB$ is symmetric positive definite and $\Phi_2=\frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S)+4B(\beta B^TB)^{-1}B^T$ is nonsymmetric positive definite, in practical implementations the sub-systems $B^TBz=r$ can be solved by the Cholesky factorization, while the system $\Phi_2z=r$ arising in the SFHSS iteration method can be solved inexactly by a sparse ILU factorization or by the GMRES iteration method.
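Steps (1)-(4) of Algorithm 2.1 can be sketched in NumPy as follows (dense `solve` calls stand in for the Cholesky/ILU solvers suggested above, and the test matrices are illustrative random stand-ins); the result is checked against the defining system $\mathcal{P}_{SFHSS}z=r$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
K = rng.standard_normal((n, n))
S = K - K.T
B = rng.standard_normal((n, m))

alpha, beta = 2.0, 1.0
In = np.eye(n)
Theta1 = (alpha * In + 2 * H) @ (alpha * In + 2 * S) / alpha
BtB = B.T @ B
Phi2 = Theta1 + 4 * B @ np.linalg.solve(beta * BtB, B.T)

P = 0.25 * np.block([[Theta1, 2 * B], [-2 * B.T, beta * BtB]])

r1, r2 = rng.standard_normal(n), rng.standard_normal(m)

t = np.linalg.solve(BtB, r2 / beta)                # (1) B^T B t = r2 / beta
z1 = np.linalg.solve(Phi2, 4 * (r1 - 2 * B @ t))   # (2) Phi2 z1 = 4 (r1 - 2 B t)
p = np.linalg.solve(BtB, (2 / beta) * (B.T @ z1))  # (3) B^T B p = (2/beta) B^T z1
z2 = p + 4 * t                                     # (4) z2 = p + 4 t

z = np.concatenate([z1, z2])
assert np.allclose(P @ z, np.concatenate([r1, r2]))
```

In a real implementation one would factor $B^TB$ once (Cholesky) and reuse the factor in steps (1) and (3).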

Consider the solution of a linear system $\mathcal{A}x=b$ with $\mathcal{A}\in\mathbb{R}^{(n+m)\times(n+m)}$. If $M$ is nonsingular, the representation $\mathcal{A}=M-N$ is called a splitting. Setting the iteration matrix $T=M^{-1}N$ and $c=M^{-1}b$, the stationary iteration scheme for the linear system $\mathcal{A}x=b$ is defined as

$$x^{(k+1)}=Tx^{(k)}+c. \qquad (3.1)$$

As is well known, if the spectral radius $\rho(T)$ of the iteration matrix $T$ is less than one, then the stationary iteration scheme (3.1) converges to the unique solution of $\mathcal{A}x=b$ for any choice of the initial vector $x^{(0)}$.
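As a minimal, self-contained instance of (3.1) — using the classical Jacobi splitting of a strictly diagonally dominant matrix rather than the SFHSS splitting — the following sketch checks $\rho(T)<1$ and iterates to the solution:

```python
import numpy as np

# Jacobi splitting A = M - N with M = diag(A): a concrete instance of (3.1).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))
N = M - A
T = np.linalg.solve(M, N)   # iteration matrix T = M^{-1} N
c = np.linalg.solve(M, b)   # c = M^{-1} b

rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1              # strict diagonal dominance guarantees this here

x = np.zeros(3)
for _ in range(200):
    x = T @ x + c           # the stationary scheme (3.1)
assert np.allclose(A @ x, b)
```

The same loop with $M=\mathcal{P}_{SFHSS}$ and $N=\mathcal{N}_{SFHSS}$ gives the SFHSS iteration (2.4); the convergence analysis below identifies when $\rho(\tau(\alpha,\beta))<1$.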

Let $\lambda$ be an eigenvalue of the iteration matrix $\tau(\alpha,\beta)$ in (2.3) and let $x=(u^T,v^T)^T$ be the corresponding eigenvector. To study the convergence of the SFHSS iteration method (2.4), we need to consider the following generalized eigenvalue problem:

$$\begin{pmatrix} \frac1\alpha(\alpha I_n-2H)(\alpha I_n-2S) & -2B \\ 2B^T & \beta B^TB \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\lambda\begin{pmatrix} \frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S) & 2B \\ -2B^T & \beta B^TB \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}. \qquad (3.2)$$

By straightforward computation, the generalized eigenvalue problem (3.2) is equivalent to

$$\begin{cases} \frac1\alpha(\alpha I_n-2H)(\alpha I_n-2S)u-2Bv=\frac\lambda\alpha(\alpha I_n+2H)(\alpha I_n+2S)u+2\lambda Bv,\\ 2B^Tu+\beta B^TBv=-2\lambda B^Tu+\lambda\beta B^TBv. \end{cases} \qquad (3.3)$$

For convenience, we denote

$$\frac{u^*Au}{u^*u}=a+bi,\qquad\frac{u^*HSu}{u^*u}=c+di,\qquad\frac{u^*B(B^TB)^{-1}B^Tu}{u^*u}=e, \qquad (3.4)$$

where $a>0$, $e\ge0$, and

$$c=\frac{u^*(HS-SH)u}{2u^*u}\quad\text{and}\quad d=\frac{u^*(HS+SH)u}{2i\,u^*u}.$$

    In what follows, we give the following lemmas to verify the convergence of the SFHSS iteration method (2.4).

Lemma 3.1. ([21]). If $S$ is a skew-Hermitian matrix, then $iS$ is a Hermitian matrix and $u^*Su$ is a purely imaginary number or zero for all $u\in\mathbb{C}^n$.

Lemma 3.2. ([22]). Both roots of the complex quadratic equation $\lambda^2-\Phi\lambda+\Psi=0$ have modulus less than one if and only if $|\Phi-\bar\Phi\Psi|+|\Psi|^2<1$, where $\bar\Phi$ denotes the complex conjugate of $\Phi$.

Lemma 3.3. Let $A$ be nonsymmetric positive definite and let $B$ have full column rank. Assume $\lambda$ is an eigenvalue of the iteration matrix $\tau(\alpha,\beta)$ defined in (2.3) with $\beta>0$. If $\alpha>0$, then $\lambda\ne1$; and if $\alpha^2>4|c|_{\max}$, with $|c|_{\max}$ denoting the maximum of $|c|$, then $\lambda\ne-1$.

Proof. We follow the spirit of the proof of [17]. If $\lambda=1$, then it is straightforward to show that Eq (3.2) yields

$$\begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

Since $A$ is nonsymmetric positive definite and $B$ has full column rank, it follows that $u=0$ and $v=0$, which contradicts $(u,v)$ being an eigenvector.

If $\lambda=-1$, then Eq (3.2) reduces to

$$\begin{cases} (\alpha^2I_n+4HS)u=0,\\ \beta B^TBv=0. \end{cases}$$

If $\alpha^2>4|c|_{\max}$, then $\alpha^2I_n+4HS$ is nonsingular. Therefore, again $u=0$ and $v=0$, which contradicts $(u,v)$ being an eigenvector.

    Thus, we complete the proof.

Lemma 3.4. Let the conditions of Lemma 3.3 be satisfied. Assume that $\lambda$ is an eigenvalue of the iteration matrix $\tau(\alpha,\beta)$ defined in (2.3) and $(u,v)$ is the corresponding eigenvector with $u\in\mathbb{C}^n$ and $v\in\mathbb{C}^m$. If $0\ne u\in\mathcal{N}(B^T)$, then $|\lambda|<1$.

Proof. We first verify that $u\ne0$. Otherwise, if $u=0$, then the second equation of (3.2) gives $\beta(\lambda-1)B^TBv=0$. By Lemma 3.3, $\lambda\ne1$, so $B^TBv=0$. Since $B$ has full column rank, $B^TB$ is nonsingular, and hence $v=0$. This contradicts $(u,v)$ being an eigenvector, so $u\ne0$.

We now turn to verifying $|\lambda|<1$. Assume $u\in\mathcal{N}(B^T)$ with $\|u\|_2=1$; then the second equation of (3.2) gives $v=0$. Following [8, Theorem 2.2] and multiplying the first equation of (3.2) from the left by $u^*$, it is obvious that

$$|\lambda|=\left|u^*(\alpha I_n+2S)^{-1}(\alpha I_n+2H)^{-1}(\alpha I_n-2H)(\alpha I_n-2S)u\right|\le\left\|(\alpha I_n+2S)^{-1}(\alpha I_n+2H)^{-1}(\alpha I_n-2H)(\alpha I_n-2S)\right\|_2\le\left\|(\alpha I_n-2H)(\alpha I_n+2H)^{-1}\right\|_2=\max_i\left|\frac{\alpha-2\lambda_i(H)}{\alpha+2\lambda_i(H)}\right|<1,$$

where $\lambda_i(H)$ denotes the $i$th eigenvalue of the symmetric positive definite matrix $H$.

    Therefore, the proof is completed.

Theorem 3.1. Let the conditions of Lemma 3.3 be satisfied, and assume that $\lambda$ is an eigenvalue of the iteration matrix $\tau(\alpha,\beta)$ defined in (2.3). Suppose that the positive iteration parameters $\alpha$ and $\beta$ satisfy the following conditions:

if $ac+bd\ge0$, then

$$\alpha^2>4|c|_{\max}\quad\text{and}\quad\beta>\frac{16d^2e}{\alpha^3a^2};$$

and if $ac+bd<0$, then

$$\alpha^2>\max\left\{\frac{4|ac+bd|_{\max}}{a},\,4|c|_{\max}\right\}\quad\text{and}\quad\beta>\frac{16d^2e}{\alpha a\left[\alpha^2a-|4ac+4bd|\right]}.$$

Then the iteration method (2.4) converges to the unique solution of the nonsymmetric saddle-point problem (1.1), i.e.,

$$|\lambda|<1.$$

Proof. Combining Lemmas 3.3 and 3.4, to complete the proof we only need to verify the case $B^Tu\ne0$. Suppose $u\notin\mathcal{N}(B^T)$; then the second equation of (3.2) yields

$$v=\frac{2(\lambda+1)}{\beta(\lambda-1)}(B^TB)^{-1}B^Tu. \qquad (3.5)$$

By substituting the relation (3.5) into the first equation of (3.2), we have

$$(\alpha I_n-2H)(\alpha I_n-2S)u=\lambda(\alpha I_n+2H)(\alpha I_n+2S)u+\frac{4\alpha(\lambda+1)^2}{\beta(\lambda-1)}B(B^TB)^{-1}B^Tu. \qquad (3.6)$$

Multiplying Eq (3.6) from the left by $u^*$, after straightforward calculations, (3.6) yields

$$\alpha^2\beta(\lambda-1)^2+2\alpha\beta(\lambda^2-1)\frac{u^*Au}{u^*u}+4\beta(\lambda-1)^2\frac{u^*HSu}{u^*u}+4\alpha(\lambda+1)^2\frac{u^*B(B^TB)^{-1}B^Tu}{u^*u}=0. \qquad (3.7)$$

Following (3.4), a quadratic equation in $\lambda$ is derived from Eq (3.7). After some algebra, it is straightforward to show that

$$\left[(\alpha^2+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i\right]\lambda^2+2(4\alpha e-\alpha^2\beta-4\beta c-4\beta di)\lambda+\alpha^2\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i=0. \qquad (3.8)$$

If $(\alpha^2+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i=0$, then it is easy to see that $(\alpha^2+2\alpha a+4c)\beta+4\alpha e=0$ and $\alpha b+2d=0$. In this case, Eq (3.8) is linear in $\lambda$ and yields

$$\lambda=-\frac{\alpha^2\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i}{2(4\alpha e-\alpha^2\beta-4\beta c-4\beta di)}=\frac{\beta a+\beta bi}{4e+\beta a+\beta bi}.$$

By Lemma 3.3, $\lambda\ne\pm1$ since $\alpha^2>4|c|_{\max}$. Noting that $a>0$ and, in this case, $e>0$ (because $B^Tu\ne0$), we have

$$|\lambda|=\frac{\sqrt{(\beta a)^2+(\beta b)^2}}{\sqrt{(4e+\beta a)^2+(\beta b)^2}}<1.$$

In what follows, we consider the case $(\alpha^2+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i\ne0$. From Lemma 3.2, we know that $|\lambda|<1$ if and only if $|\Phi-\bar\Phi\Psi|+|\Psi|^2<1$, where, dividing (3.8) by its leading coefficient, we denote

$$\Phi=-\frac{2(4\alpha e-\alpha^2\beta-4\beta c-4\beta di)}{(\alpha^2+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i}$$

and

$$\Psi=\frac{\alpha^2\beta-2\alpha\beta a+4\beta c+4\alpha e+2\beta(2d-\alpha b)i}{(\alpha^2+2\alpha a+4c)\beta+4\alpha e+2(\alpha b+2d)\beta i}.$$

After straightforward computation, we have

$$|\Phi-\bar\Phi\Psi|+|\Psi|^2=\frac{8\alpha\beta\sqrt{\Gamma+(16de)^2}+\Upsilon+4\beta^2(2d-\alpha b)^2}{(\alpha^2\beta+2\alpha\beta a+4\beta c+4\alpha e)^2+4\beta^2(2d+\alpha b)^2},$$

where $\Gamma=(4\alpha ae-\alpha^2\beta a-4\beta ac-4\beta bd)^2$ and $\Upsilon=(\alpha^2\beta-2\alpha\beta a+4\beta c+4\alpha e)^2$.

The inequality

$$|\Phi-\bar\Phi\Psi|+|\Psi|^2<\frac{8\alpha\beta\sqrt{\Gamma+16\alpha ae(\alpha^2\beta a+4\beta ac+4\beta bd)}+\Upsilon+4\beta^2(2d-\alpha b)^2}{(\alpha^2\beta+2\alpha\beta a+4\beta c+4\alpha e)^2+4\beta^2(2d+\alpha b)^2}=\frac{8\alpha\beta(4\alpha ae+\alpha^2\beta a+4\beta ac+4\beta bd)+\Upsilon+4\beta^2(2d-\alpha b)^2}{(\alpha^2\beta+2\alpha\beta a+4\beta c+4\alpha e)^2+4\beta^2(2d+\alpha b)^2}=1$$

holds true provided that

$$16\alpha ae(\alpha^2\beta a+4\beta ac+4\beta bd)>(16de)^2. \qquad (3.9)$$

This implies

$$\alpha^2a+4ac+4bd>0. \qquad (3.10)$$

Following the inequalities (3.9) and (3.10), if $ac+bd\ge0$, we have

$$\alpha^2>0\ge-\frac{4(ac+bd)}{a}$$

and

$$\beta>\frac{16d^2e}{\alpha^3a^2}\ge\frac{16d^2e}{\alpha a\left[\alpha^2a+(4ac+4bd)\right]}.$$

If $ac+bd<0$, then we get

$$\alpha^2>\frac{4|ac+bd|_{\max}}{a}\ge-\frac{4(ac+bd)}{a}>0$$

and

$$\beta>\frac{16d^2e}{\alpha a\left[\alpha^2a-|4ac+4bd|\right]}=\frac{16d^2e}{\alpha a\left[\alpha^2a+(4ac+4bd)\right]}.$$

By making use of $\alpha^2>4|c|_{\max}$, we complete the proof.

For the sake of convenience, we denote the maxima of $|bd|$, $|d|$ and $e$ by $|bd|_{\max}$, $|d|_{\max}$ and $e_{\max}$, respectively, and the minima of $a$, $b$, $c$ and $d$ by $a_{\min}$, $b_{\min}$, $c_{\min}$ and $d_{\min}$, respectively. According to Theorem 3.1, we give the following sufficient conditions for the convergence of the SFHSS iteration method (2.4).

Corollary 3.1. Let the conditions of Theorem 3.1 be satisfied, and let the positive iteration parameters $\alpha$ and $\beta$ satisfy the following conditions:

if $ac+bd\ge0$, then

$$\alpha>2\sqrt{|c|_{\max}}\quad\text{and}\quad\beta>\frac{16|d|^2_{\max}e_{\max}}{\alpha^3a^2_{\min}};$$

if $ac+bd<0$, then

$$\alpha>2\sqrt{|c|_{\max}+\frac{|bd|_{\max}}{a_{\min}}}$$

and

$$\beta>\frac{16|d|^2_{\max}e_{\max}}{\alpha a_{\min}\left[\alpha^2a_{\min}-4|ac+bd|_{\max}\right]}.$$

Then the SFHSS iteration method (2.4) converges to the unique solution of the nonsymmetric saddle-point problem (1.1).

Proof. According to Theorem 3.1, if $ac+bd\ge0$, then

$$\alpha^2>4|c|_{\max},\qquad\beta>\frac{16|d|^2_{\max}e_{\max}}{\alpha^3a^2_{\min}}\ge\frac{16d^2e}{\alpha^3a^2},$$

and if $ac+bd<0$, then

$$\alpha^2>4\left[|c|_{\max}+\frac{|bd|_{\max}}{a_{\min}}\right]\ge\max\left\{\frac{4|ac+bd|_{\max}}{a},\,4|c|_{\max}\right\}$$

and

$$\beta>\frac{16|d|^2_{\max}e_{\max}}{\alpha a_{\min}\left[\alpha^2a_{\min}-|4ac+4bd|_{\max}\right]}\ge\frac{16d^2e}{\alpha a\left[\alpha^2a-|4ac+4bd|\right]}.$$

Therefore, we complete the proof.

The main objective of this section is to establish some inclusion regions for the spectrum of $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}$ for the saddle-point problem (1.1).

In the following, to derive bounds on the eigenvalues of the preconditioned saddle-point matrix $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}$, we study the eigenvalue problem $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}x=\eta x$, that is,

$$\mathcal{A}x=\eta\mathcal{P}_{SFHSS}x, \qquad (4.1)$$

where $\eta$ denotes any eigenvalue of the preconditioned matrix $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}$ with corresponding eigenvector $x=(u^T,v^T)^T$.

For simplicity, we denote $v^*B^TBv$ by $\sigma^2$ and the null space of $B^T$ by $\mathcal{N}(B^T)$; at the same time, the matrix $\mathcal{R}_{SFHSS}$ is defined by

$$\mathcal{R}_{SFHSS}=\frac14\begin{pmatrix} \alpha I_n+\frac4\alpha HS & 0 \\ 0 & \beta B^TB \end{pmatrix};$$

then it is easy to see that

$$\mathcal{P}_{SFHSS}=\mathcal{R}_{SFHSS}+\frac12\mathcal{A}. \qquad (4.2)$$

After some algebra, we can rewrite the generalized eigenvalue problem (4.1) as

$$\left(1-\frac\eta2\right)\mathcal{A}x=\eta\mathcal{R}_{SFHSS}x. \qquad (4.3)$$

Following Lemma 3.3, since the eigenvalue $\lambda=1-\eta$ of $\tau(\alpha,\beta)$ satisfies $\lambda\ne-1$ when $\alpha^2>4|c|_{\max}$, we have $\eta=1-\lambda\ne2$, so $1-\frac12\eta\ne0$. We set

$$\theta=\frac{2\eta}{2-\eta},\quad\text{for which}\quad\eta=\frac{2\theta}{\theta+2}=2-\frac{4}{\theta+2}.$$

For convenience, we use $\Re(\theta)$ and $\Im(\theta)$ to denote the real part and imaginary part of the eigenvalue $\theta$, respectively.

We can explicitly write the equivalent eigenproblem $\mathcal{A}x=\theta\mathcal{R}_{SFHSS}x$ as

$$\begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\frac\theta4\begin{pmatrix} \alpha I_n+\frac4\alpha HS & 0 \\ 0 & \beta B^TB \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}. \qquad (4.4)$$

Equivalently, (4.4) gives

$$\begin{cases} Au+Bv=\theta\left(\frac\alpha4I_n+\frac1\alpha HS\right)u,\\ -B^Tu=\frac14\beta\theta B^TBv. \end{cases} \qquad (4.5)$$

It is obvious that $u\ne0$; otherwise, the second equation of (4.5) would imply $\theta=0$ or $v=0$, and by Lemma 3.3 neither of these can be satisfied. So $u\ne0$.

Theorem 4.1. Let the conditions of Lemma 3.3 be satisfied. Assume $(\eta,x)$ is an eigenpair of (4.1) with $x=(u^T,v^T)^T$ and $\|u\|_2=1$. Then for $\alpha,\beta>0$, the eigenvalue $\eta$ can be written as $\eta=\frac{2\theta}{\theta+2}$, where $\theta$ satisfies the following:

(i) If $v=0$, then $u\in\mathcal{N}(B^T)$ and

$$\frac{4\alpha|\lambda(A)|_{\min}}{\sqrt{\alpha^2(\alpha^2+8c_{\max})+16\rho^2(HS)}}\le|\theta|\le\frac{4\alpha\rho(A)}{\sqrt{\alpha^2(\alpha^2+8c_{\min})+16|\lambda(HS)|^2_{\min}}}, \qquad (4.6)$$

with

$$\Re(\theta)=\frac{4\alpha(\alpha^2a+4ac+4bd)}{(\alpha^2+4c)^2+16d^2}\quad\text{and}\quad\Im(\theta)=\frac{4\alpha(\alpha^2b+4bc-4ad)}{(\alpha^2+4c)^2+16d^2}. \qquad (4.7)$$

(ii) If $v\ne0$, then $u\notin\mathcal{N}(B^T)$ and

$$\frac{4\alpha(\alpha^2+4c_{\min})|\lambda(A)|_{\min}}{\Theta(\alpha,\beta)}<|\theta|\le\frac{4\alpha\rho(A)\sqrt{\Pi(\alpha,\beta)}}{\Upsilon(\alpha,\beta)}, \qquad (4.8)$$

with

$$\Re(\theta)=\frac{4\alpha(\alpha^2a+4ac+4bd-\alpha\beta a\sigma^2)}{(\alpha^2+4c)^2-(\alpha\beta\sigma^2)^2+16d^2},\qquad\Im(\theta)=\frac{4\alpha(\alpha^2b+4bc+\alpha\beta b\sigma^2-4ad)}{(\alpha^2+4c)^2-(\alpha\beta\sigma^2)^2+16d^2}, \qquad (4.9)$$

and

$$\Pi(\alpha,\beta)=\alpha^2(\alpha^2+8c_{\max})+16\rho^2(HS)+2(\alpha^2+4c_{\max})\alpha\beta\sigma^2_{\max}+(\alpha\beta\sigma^2_{\max})^2,$$
$$\omega_1(\alpha,\beta)=\alpha^2(\alpha^2+8c_{\min})+16|\lambda(HS)|^2_{\min}-(\alpha\beta\sigma^2_{\max})^2,$$
$$\omega_2(\alpha,\beta)=(\alpha\beta\sigma^2_{\min})^2-\alpha^2(\alpha^2+8c_{\max})-16\rho^2(HS),$$
$$\Upsilon(\alpha,\beta)=\min\{\omega_1(\alpha,\beta),\omega_2(\alpha,\beta)\},\qquad\Theta(\alpha,\beta)=\alpha^2(\alpha^2+8c_{\max})+16\rho^2(HS)+(\alpha\beta\sigma^2_{\max})^2.$$

Proof. In order to obtain the inequalities (4.6) and (4.8), we need to consider two cases: (i) $v=0$ and (ii) $v\ne0$.

We first verify (i). If $v=0$, it follows from (4.5) that

$$Au=\theta\left(\frac\alpha4I_n+\frac1\alpha HS\right)u. \qquad (4.10)$$

Multiplying both sides of (4.10) from the left by $u^*$, it follows from (3.4) that

$$\theta=\frac{4\alpha(a+bi)}{(\alpha^2+4c)+4di}=\frac{4\alpha(\alpha^2a+4ac+4bd)+4\alpha(\alpha^2b+4bc-4ad)i}{(\alpha^2+4c)^2+16d^2}. \qquad (4.11)$$

Consequently, we obtain (4.7). After some algebra, it is straightforward to show that

$$|\theta|=\sqrt{\Re^2(\theta)+\Im^2(\theta)}=\frac{4\alpha\sqrt{a^2+b^2}}{\sqrt{(\alpha^2+4c)^2+16d^2}}. \qquad (4.12)$$

By straightforward calculation, we then get the inequality (4.6).

We now demonstrate the validity of (ii). If $v\ne0$, multiplying the first equation of (4.5) from the left by $u^*$ yields

$$u^*Au+u^*Bv=\theta u^*\left(\frac\alpha4I_n+\frac1\alpha HS\right)u. \qquad (4.13)$$

Multiplying the conjugate transpose of the second equation of (4.5) by $v$, we get

$$u^*Bv=-\frac14\beta\bar\theta v^*B^TBv. \qquad (4.14)$$

Substituting (4.14) into (4.13) and using $\|u\|_2=1$, we obtain

$$4\alpha u^*Au=\alpha^2\theta+4\theta u^*HSu+\alpha\beta\bar\theta v^*B^TBv. \qquad (4.15)$$

With the notation above, it follows that

$$4\alpha(a+bi)=\alpha^2\theta+4\theta(c+di)+\alpha\beta\bar\theta\sigma^2. \qquad (4.16)$$

Separating (4.16) into real and imaginary parts, it is obvious that

$$(\alpha^2+4c+\alpha\beta\sigma^2)\,\Re(\theta)-4d\,\Im(\theta)=4\alpha a$$

and

$$(\alpha^2+4c-\alpha\beta\sigma^2)\,\Im(\theta)+4d\,\Re(\theta)=4\alpha b.$$

Through direct calculations, we get (4.9) and

$$|\theta|=\sqrt{\Re^2(\theta)+\Im^2(\theta)}=\frac{4\alpha\sqrt{\varphi(\alpha,\beta)+\psi(\alpha,\beta)}}{\left|(\alpha^2+4c)^2-(\alpha\beta\sigma^2)^2+(4d)^2\right|}, \qquad (4.17)$$

where

$$\varphi(\alpha,\beta)=(a^2+b^2)\left[(\alpha^2+4c)^2+(4d)^2+(\alpha\beta\sigma^2)^2\right]$$

and

$$\psi(\alpha,\beta)=2\left[(b^2-a^2)(\alpha^2+4c)-8abd\right]\alpha\beta\sigma^2.$$

Since $\alpha^2a+4ac+4bd>0$, we have

$$\varphi(\alpha,\beta)+\psi(\alpha,\beta)<\varphi(\alpha,\beta)+2(b^2-a^2)(\alpha^2+4c)\alpha\beta\sigma^2+4a^2(\alpha^2+4c)\alpha\beta\sigma^2=\varphi(\alpha,\beta)+2(b^2+a^2)(\alpha^2+4c)\alpha\beta\sigma^2=(a^2+b^2)\left[(\alpha^2+4c+\alpha\beta\sigma^2)^2+(4d)^2\right]. \qquad (4.18)$$

Considering the two cases

$$(\alpha^2+4c)^2+(4d)^2>(\alpha\beta\sigma^2)^2\quad\text{or}\quad(\alpha^2+4c)^2+(4d)^2<(\alpha\beta\sigma^2)^2,$$

by straightforward computation we can find that

$$|\theta|<\frac{4\alpha\sqrt{(a^2+b^2)\left[(\alpha^2+4c+\alpha\beta\sigma^2)^2+(4d)^2\right]}}{\left|(\alpha^2+4c)^2-(\alpha\beta\sigma^2)^2+16d^2\right|}\le\frac{4\alpha\sqrt{a^2+b^2}\sqrt{\Pi(\alpha,\beta)}}{\Upsilon(\alpha,\beta)}. \qquad (4.19)$$

Additionally, if $a\ge b$, using $\alpha^2a+4ac+4bd>0$ again, we have

$$\varphi(\alpha,\beta)+\psi(\alpha,\beta)>\varphi(\alpha,\beta)+2\left[(b^2-a^2)\left(-\frac{4bd}{a}\right)-8abd\right]\alpha\beta\sigma^2=(a^2+b^2)\left[(\alpha^2+4c)^2+(4d)^2+(\alpha\beta\sigma^2)^2-\frac{8bd}{a}\,\alpha\beta\sigma^2\right]\ge(a^2+b^2)\left[(\alpha^2+4c)^2+(4d)^2+(\alpha\beta\sigma^2)^2-8d\alpha\beta\sigma^2\right]\ge(a^2+b^2)(\alpha^2+4c)^2. \qquad (4.20)$$

If $a<b$, then we obtain

$$\varphi(\alpha,\beta)+\psi(\alpha,\beta)>\varphi(\alpha,\beta)-16a^2d\alpha\beta\sigma^2>(a^2+b^2)(\alpha^2+4c)^2+2a^2\left[(4d)^2+(\alpha\beta\sigma^2)^2-8d\alpha\beta\sigma^2\right]>(a^2+b^2)(\alpha^2+4c)^2. \qquad (4.21)$$

Hence, combining (4.20) and (4.21), we have

$$|\theta|>\frac{4\alpha\sqrt{a^2+b^2}\,(\alpha^2+4c)}{(\alpha^2+4c)^2+(\alpha\beta\sigma^2)^2+(4d)^2}\ge\frac{4\alpha|\lambda(A)|_{\min}(\alpha^2+4c_{\min})}{\Theta(\alpha,\beta)}. \qquad (4.22)$$

    Hence, we complete the proof.

Remark 4.1. Following Theorem 4.1, when $\Im(\theta)=0$ the eigenvalue $\theta$ satisfies two cases:

(i) If $v=0$ and $\alpha^2b+4bc=4ad$, then $u\in\mathcal{N}(B^T)$ and the real eigenvalue

$$\theta=\frac{4\alpha a}{\alpha^2+4c}>0$$

is bounded by

$$\frac{4\alpha a_{\min}}{\alpha^2+4c_{\max}}<\theta<\frac{4\alpha a_{\max}}{\alpha^2+4c_{\min}}.$$

(ii) If $v\ne0$ and $\alpha^2b+4bc+\alpha\beta b\sigma^2=4ad$, then $u\notin\mathcal{N}(B^T)$ and the real eigenvalue

$$\theta=\frac{4\alpha a}{\alpha^2+4c+\alpha\beta\sigma^2}>0$$

is bounded as

$$\frac{4\alpha a_{\min}}{\alpha^2+4c_{\max}+\alpha\beta\sigma^2_{\max}}<\theta<\frac{4\alpha a_{\max}}{\alpha^2+4c_{\min}+\alpha\beta\sigma^2_{\min}}.$$

Theorem 4.2. Let the conditions of Theorem 4.1 be satisfied. For any iteration parameters $\alpha,\beta>0$, the eigenvalue $\eta$ of the SFHSS preconditioned matrix $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}$ satisfies

$$\frac{2|\theta|_{\min}}{|\theta|_{\min}+2}\le|\eta|<\frac{2|\theta|_{\max}}{\sqrt{|\theta|^2_{\max}+4}}. \qquad (4.23)$$

Proof. For any iteration parameters $\alpha,\beta>0$, since $|\theta|=\sqrt{\Re^2(\theta)+\Im^2(\theta)}$, we get $0<\Re(\theta)\le|\theta|$. From $\eta=\frac{2\theta}{\theta+2}$, it is easy to see that

$$|\eta|=\frac{2|\theta|}{\sqrt{|\theta|^2+4\Re(\theta)+4}}.$$

Hence, we have

$$\frac{2|\theta|}{|\theta|+2}\le|\eta|<\frac{2|\theta|}{\sqrt{|\theta|^2+4}}.$$

Combining this with the monotonicity of $\frac{2|\theta|}{|\theta|+2}$ and $\frac{2|\theta|}{\sqrt{|\theta|^2+4}}$ with respect to $|\theta|$, we complete the proof of Theorem 4.2.

Combining Theorems 4.1 and 4.2, we can find the following result.

Remark 4.2. Combining Theorems 4.1 and 4.2, for $\alpha,\beta>0$, some refined bounds for the eigenvalue $\eta$ of the SFHSS preconditioned matrix $\mathcal{P}_{SFHSS}^{-1}\mathcal{A}$ are given as follows.

(i) If $v=0$, then $u\in\mathcal{N}(B^T)$ and

$$\frac{4\alpha|\lambda(A)|_{\min}}{2\alpha|\lambda(A)|_{\min}+\sqrt{\alpha^2(\alpha^2+8c_{\max})+16\rho^2(HS)}}\le|\eta|<\frac{4\alpha\rho(A)}{F(\alpha)},$$

where

$$F(\alpha)=\sqrt{4\alpha^2\rho^2(A)+\alpha^2(\alpha^2+8c_{\min})+16|\lambda(HS)|^2_{\min}}.$$

(ii) If $v\ne0$, then

$$\frac{4\alpha|\lambda(A)|_{\min}(\alpha^2+4c_{\min})}{2\alpha|\lambda(A)|_{\min}(\alpha^2+4c_{\min})+\Theta(\alpha,\beta)}<|\eta|<\frac{4\alpha\rho(A)\sqrt{\Pi(\alpha,\beta)}}{\sqrt{4\alpha^2\rho^2(A)\Pi(\alpha,\beta)+\Upsilon^2(\alpha,\beta)}}.$$

Remark 4.3. Since $\eta=\frac{2\theta}{\theta+2}$, simple algebra shows that $\eta$ is real if and only if $\Im(\theta)=0$. Following Remark 4.1, the real eigenvalue $\eta$ satisfies two cases:

(i) If $v=0$ and $\alpha^2b+4bc=4ad$, then $u\in\mathcal{N}(B^T)$ and the real eigenvalue

$$\eta=\frac{8\alpha a}{4\alpha a+2\alpha^2+8c}>0$$

meets the following inequality:

$$\frac{8\alpha a_{\min}}{4\alpha a_{\min}+2\alpha^2+8c_{\max}}<\eta<\frac{8\alpha a_{\max}}{4\alpha a_{\max}+2\alpha^2+8c_{\min}}.$$

(ii) If $v\ne0$ and $\alpha^2b+4bc+\alpha\beta b\sigma^2=4ad$, then $u\notin\mathcal{N}(B^T)$ and the real eigenvalue

$$\eta=\frac{8\alpha a}{4\alpha a+2\alpha^2+8c+2\alpha\beta\sigma^2}>0$$

is bounded by

$$\frac{8\alpha a_{\min}}{4\alpha a_{\min}+2\alpha^2+8c_{\max}+2\alpha\beta\sigma^2_{\max}}<\eta<\frac{8\alpha a_{\max}}{4\alpha a_{\max}+2\alpha^2+8c_{\min}+2\alpha\beta\sigma^2_{\min}}.$$

In the following, based on the above descriptions, we further discuss the algebraic properties of the preconditioned matrix $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$, where the preconditioner $\mathcal{P}_{\alpha,0}$ is the reduced form of (2.1) with $\beta=0$. For simplicity of description, we denote $\mathcal{P}_{SFHSS}$ and $\mathcal{N}_{SFHSS}$ with $\alpha^2>4|c|_{\max}$ and $\beta=0$ by $\mathcal{P}_{\alpha,0}$ and $\mathcal{N}_{\alpha,0}$, respectively. For more details on the algebraic properties of preconditioned matrices, we refer to [23,24,25].

Theorem 4.3. Let $A$ be nonsymmetric positive definite and let $B$ have full column rank. Then the preconditioned saddle-point matrix $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$ has an eigenvalue $\eta=2$ with algebraic multiplicity at least $m$, and the remaining eigenvalues are
$$\eta_j=\frac{4\alpha[\lambda_j(H)+\lambda_j(S)]}{[\alpha+2\lambda_j(H)][\alpha+2\lambda_j(S)]},\qquad j=1,2,\dots,n,$$
where $\lambda_j(H)$ and $\lambda_j(S)$ denote the $j$th eigenvalues of $H$ and $S$, respectively.

Proof. If $\beta=0$, then the second equation of (3.2) gives

$$(\lambda+1)B^Tu=0,$$

so either $\lambda=-1$ or $B^Tu=0$. If $\lambda=-1$, i.e., $\eta=1-\lambda=2$, then the first equation of (3.2) gives

$$(\alpha^2I_n+4HS)u=0.$$

Since $\alpha^2I_n+4HS$ is nonsingular when $\alpha^2>4|c|_{\max}$, it is easy to see that the related eigenvectors have the form $[0\;\;v_l^T]^T$ $(l=1,2,\dots,m)$. If $B^Tu=0$, using the first equation of (3.2) again, we obtain

$$\lambda=\frac{u^*(\alpha I_n-2H)(\alpha I_n-2S)u}{u^*(\alpha I_n+2H)(\alpha I_n+2S)u}=\frac{[\alpha-2\lambda(H)][\alpha-2\lambda(S)]}{[\alpha+2\lambda(H)][\alpha+2\lambda(S)]},$$

and therefore

$$\eta=1-\lambda=\frac{4\alpha[\lambda(H)+\lambda(S)]}{[\alpha+2\lambda(H)][\alpha+2\lambda(S)]}.$$

    Hence, we complete the proof of Theorem 4.3.

Remark 4.4. Following Theorem 4.3, it is easy to see that the preconditioned matrix $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$ has $m+j$ $(1\le j\le n)$ linearly independent eigenvectors, where:

(i) the $m$ linearly independent eigenvectors related to the eigenvalue $2$ have the form $[0\;\;v_l^T]^T$ $(l=1,2,\dots,m)$;

(ii) the $j$ $(1\le j\le n)$ linearly independent eigenvectors associated with eigenvalues unequal to $2$ have the form $[u_s^T\;\;v_s^T]^T$ $(s=1,2,\dots,j)$ with $B^Tu_s=0$.

In what follows, we study the properties of the minimal polynomial of the preconditioned matrix $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$, which are beneficial to Krylov subspace acceleration. To derive an expression for the corresponding characteristic polynomial of $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$, we decompose the preconditioner $\mathcal{P}_{\alpha,0}$ once again as

$$\mathcal{P}_{\alpha,0}=\frac14\begin{pmatrix} I_n & 0 \\ -2B^TF^{-1} & I_m \end{pmatrix}\begin{pmatrix} F & 0 \\ 0 & 4B^TF^{-1}B \end{pmatrix}\begin{pmatrix} I_n & 2F^{-1}B \\ 0 & I_m \end{pmatrix},$$

where

$$F=\frac1\alpha(\alpha I_n+2H)(\alpha I_n+2S).$$

It is obvious that

$$\mathcal{P}_{\alpha,0}^{-1}=4\begin{pmatrix} I_n & -2F^{-1}B \\ 0 & I_m \end{pmatrix}\begin{pmatrix} F^{-1} & 0 \\ 0 & (4B^TF^{-1}B)^{-1} \end{pmatrix}\begin{pmatrix} I_n & 0 \\ 2B^TF^{-1} & I_m \end{pmatrix}.$$

A simple computation then reveals that

$$\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}=I_{n+m}-\mathcal{P}_{\alpha,0}^{-1}\mathcal{N}_{\alpha,0}=I_{n+m}-\begin{pmatrix} F^{-1}G-F^{-1}BD & 0 \\ \frac12D & -I_m \end{pmatrix}=\begin{pmatrix} I_n-F^{-1}G+F^{-1}BD & 0 \\ -\frac12D & 2I_m \end{pmatrix},$$

where

$$G=\frac1\alpha(\alpha I_n-2H)(\alpha I_n-2S)$$

and

$$D=(B^TF^{-1}B)^{-1}(B^TF^{-1}G+B^T).$$

For the eigenvectors satisfying $B^Tu=0$, we get, for $j=1,2,\dots,n$,

$$\eta_j(I_n-F^{-1}G+F^{-1}BD)=\eta_j(I_n-F^{-1}G)=\frac{4\alpha[\lambda_j(H)+\lambda_j(S)]}{[\alpha+2\lambda_j(H)][\alpha+2\lambda_j(S)]}.$$

Then the characteristic polynomial of $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$ is

$$\Phi_{\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}}(\eta)=\det\left(\eta I_{n+m}-\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}\right)=(\eta-2)^m\prod_{j=1}^{n}(\eta-\eta_j).$$

Denote

$$\Psi(\eta)=(\eta-2)\prod_{j=1}^{n}(\eta-\eta_j).$$

It is straightforward to show that $\Psi(\eta)$ is a polynomial in $\eta$ of degree $n+1$. A simple computation then reveals that

$$\Psi(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A})=\left(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}-2I_{n+m}\right)\prod_{j=1}^{n}\left(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}-\eta_jI_{n+m}\right)=\begin{pmatrix} (\Theta-2I_n)\prod_{j=1}^{n}(\Theta-\eta_jI_n) & 0 \\ -\frac12D\prod_{j=1}^{n}(\Theta-\eta_jI_n) & 0 \end{pmatrix},$$

where $\Theta=I_n-F^{-1}G+F^{-1}BD$.

Since $\eta_j$ $(j=1,2,\dots,n)$ are the eigenvalues of the matrix $\Theta$, the Cayley-Hamilton theorem gives $\prod_{j=1}^{n}(\Theta-\eta_jI_n)=0$, and this leads to $\Psi(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A})=0$.

The following conclusion is a direct consequence of the above statements, and therefore its proof is omitted.

Theorem 4.4. Under the assumptions of Theorem 4.3, if $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$ has $k$ $(1\le k\le n)$ distinct eigenvalues $\eta_j$ $(1\le j\le k)$ with respective algebraic multiplicities $\gamma_j$ satisfying $\sum_{j=1}^{k}\gamma_j=n$, then the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_{\alpha,0}^{-1}\mathcal{A}$ is at most $k+1$. Thus, the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_{\alpha,0}^{-1}\mathcal{A},b)$ is at most $k+1$.

Next, we restrict our attention to the determination of the optimal parameters. It is easy to see that the performance of the $\mathcal{P}_{SFHSS}$ preconditioner largely depends on the choice of the parameters $\alpha$ and $\beta$. However, in our experience, selecting the optimal parameters is a difficult task, and we therefore usually resort to an estimation method in practical implementations.

By taking steps similar to those in [26], we are ready to consider the choice of the parameters $\alpha$ and $\beta$ in the SFHSS iteration method. In order to obtain a fast convergence rate of the SFHSS iteration method (2.2) and a clustered eigenvalue distribution of the SFHSS-preconditioned matrix, it is important that the preconditioner $2\mathcal{P}_{SFHSS}$ approximates $\mathcal{A}$ as closely as possible; hence, by (4.2), we may require $\mathcal{R}_{SFHSS}\approx0$ to compute the quasi-optimal iteration parameters $\alpha_{exp}$ and $\beta_{exp}$.

We begin our analysis by minimizing the following Frobenius norm of $\mathcal{R}_{SFHSS}$:

$$\Theta(\alpha,\beta)\triangleq\|4\mathcal{R}_{SFHSS}\|_F^2=\left\|\begin{pmatrix} \alpha I_n+\frac4\alpha HS & 0 \\ 0 & \beta B^TB \end{pmatrix}\right\|_F^2=n\alpha^2+4\,\mathrm{tr}(HS-SH)-\frac{16}{\alpha^2}\,\mathrm{tr}(HS^2H)+\beta^2\,\mathrm{tr}\left((B^TB)^2\right),$$

where $\mathrm{tr}(E)$ denotes the trace of the matrix $E$.

By taking partial derivatives of $\Theta(\alpha,\beta)$, we obtain

$$\frac{\partial\Theta(\alpha,\beta)}{\partial\alpha}=2n\alpha+\frac{32}{\alpha^3}\,\mathrm{tr}(HS^2H)\quad\text{and}\quad\frac{\partial\Theta(\alpha,\beta)}{\partial\beta}=2\beta\,\mathrm{tr}\left((B^TB)^2\right).$$

It is obvious that $\Theta(\alpha,\beta)$ attains its minimum in $\alpha$ at

$$\alpha_{exp}=2\sqrt[4]{-\frac{\mathrm{tr}(HS^2H)}{n}}.$$

The proof that $\mathrm{tr}(HS^2H)<0$ can be found in [26, Lemma 1]. In addition, in practical implementations, it is a good idea to take the iteration parameter $\beta_{exp}$ as close to $0$ as possible.
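A quick numerical check of this quasi-optimal formula (with randomly generated stand-ins for $H$ and $S$) confirms that $\mathrm{tr}(HS^2H)\le0$, so the fourth root is real, and that $\alpha_{exp}$ minimizes the $\alpha$-dependent part of $\Theta(\alpha,\beta)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)   # symmetric positive definite
K = rng.standard_normal((n, n))
S = K - K.T                   # skew-symmetric

# tr(H S^2 H) = -||HS||_F^2 <= 0, so -tr(HS^2H)/n is nonnegative.
t = np.trace(H @ S @ S @ H)
assert t <= 0

alpha_exp = 2 * (-t / n) ** 0.25

# alpha-dependent part of Theta(alpha, beta):
f = lambda a: n * a**2 - 16 / a**2 * t
assert f(alpha_exp) <= min(f(0.9 * alpha_exp), f(1.1 * alpha_exp))
```

Setting $\partial\Theta/\partial\alpha=0$ gives $\alpha^4=-16\,\mathrm{tr}(HS^2H)/n$, which is exactly the value computed above.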

    In this section, we present some numerical experiments to test the feasibility and robustness of the generalized shift-HSS iteration method for solving the saddle point problem (1.1) arising from Oseen models of incompressible flow. In order to evaluate the performance of the proposed generalized shift-HSS preconditioner over some existing matrix splitting preconditioners, we compare the numerical results of the generalized shift-HSS preconditioner PSFHSS (2.1) with the GSS preconditioner PGSS (1.3), the SS preconditioner PSS (1.4), the MSS preconditioner PMSS (1.5), the GMSS preconditioner PMSS (1.5), the DPSS preconditioner PDPSS presented in [27,28] and the preconditioner PIDPSS established by [29], where the preconditioner PDPSS is defined by

$$P_{DPSS}=\frac{1}{2\alpha}\begin{pmatrix}\alpha I_n+A & 0\\ 0 & \alpha I_m\end{pmatrix}\begin{pmatrix}\alpha I_n & B\\ -B^T & \alpha I_m\end{pmatrix}. \qquad (5.1)$$

Here, the optimal iteration parameter is given by $\alpha_{\exp}=\frac{\|A\|_F+\sqrt{2}\,\|B\|_F}{\sqrt{2(n+m)}}$ ([30]). In addition, the preconditioner PIDPSS is given by

$$P_{IDPSS}=\frac{1}{2\alpha}\begin{pmatrix}\alpha I_n+A & 0\\ 0 & 2\alpha I_m\end{pmatrix}\begin{pmatrix}\alpha I_n & B\\ -B^T & 0\end{pmatrix}, \qquad (5.2)$$

and the optimal iteration parameter is given by $\alpha_{\exp}=\frac{\|A\|_F+\|B\|_F}{\sqrt{2n}}$ ([24]).
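To make the factored structure concrete, the following sketch assembles PDPSS from its two factors in (5.1) for a small random block pair and applies the preconditioner solve factor-by-factor, the way it would be used inside a Krylov method. The matrices and the value of α are hypothetical stand-ins, and dense solves are used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsymmetric, positive definite stand-in
B = rng.standard_normal((n, m))
alpha = 0.5                                       # any alpha > 0; the text uses alpha_exp

# P_DPSS = (1/(2*alpha)) * F1 * F2 with the two factors of (5.1)
F1 = np.block([[alpha * np.eye(n) + A, np.zeros((n, m))],
               [np.zeros((m, n)),      alpha * np.eye(m)]])
F2 = np.block([[alpha * np.eye(n),  B],
               [-B.T,               alpha * np.eye(m)]])
P = F1 @ F2 / (2.0 * alpha)

# a preconditioner solve P w = v reduces to two cheaper solves, one per factor
v = np.ones(n + m)
w = 2.0 * alpha * np.linalg.solve(F2, np.linalg.solve(F1, v))
print(np.allclose(P @ w, v))  # prints True
```

The point of the factorization is exactly this: one never forms or inverts P itself, only the block-diagonal and shifted skew factors.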

We use the above preconditioners to accelerate the GMRES iteration method and compare the resulting preconditioned GMRES methods in terms of both the number of iteration steps (denoted by IT) and the elapsed CPU time in seconds (denoted by CPU). In our implementations, we choose the zero vector x^{(0)} = 0 as the initial guess, take the right-hand-side vector b so that the exact solutions u and v are vectors with all entries equal to one, and set the stopping criterion to be the relative residual norm

    $$\mathrm{RES}=\frac{\|b-Ax^{(k)}\|_2}{\|b-Ax^{(0)}\|_2}<10^{-6}$$

    or the prescribed maximum iteration number k_max = n, where x^{(k)} is the solution at the kth iteration. In actual applications, the iteration parameters α and β for the preconditioner PSFHSS are chosen to be the experimentally found optimal values, which lead to the least number of iterations of the preconditioned GMRES method for each choice of the spatial mesh-size [18].
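The protocol above can be sketched with SciPy's GMRES as follows. The matrix and the ILU preconditioner are plain stand-ins for illustration (not the SFHSS preconditioner), while the all-ones exact solution, the zero initial guess, and the 10⁻⁶ relative-residual stopping criterion follow the text:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
K = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # stand-in matrix
b = K @ np.ones(n)                                  # exact solution: vector of all ones

ilu = spla.spilu(K)                                 # stand-in preconditioner
M = spla.LinearOperator((n, n), ilu.solve)

try:    # SciPy >= 1.12 uses rtol=, older versions use tol=
    x, info = spla.gmres(K, b, x0=np.zeros(n), rtol=1e-6, M=M)
except TypeError:
    x, info = spla.gmres(K, b, x0=np.zeros(n), tol=1e-6, M=M)

res = np.linalg.norm(b - K @ x) / np.linalg.norm(b - K @ np.zeros(n))
print(info, res < 1e-6)                             # info == 0 signals convergence
```

Swapping M for an SFHSS-type preconditioner solve is then a matter of wrapping its two half-step solves in the same LinearOperator interface.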

Example 5.1. [31]. Consider the Oseen equation of the form

$$\begin{cases}-\nu\Delta u+(v\cdot\nabla)u+\nabla p=f,\\ \nabla\cdot u=0,\end{cases}\quad\text{in }\Omega, \qquad (5.3)$$

where Ω is a bounded domain with suitable boundary conditions, the parameter ν > 0 denotes the viscosity, u represents the velocity vector field, v is the approximation of u from the previous Picard iteration, and p denotes the pressure. The test problem is the classical two-dimensional leaky-lid driven cavity problem. We employ the "IFISS" software package proposed in [32] to discretize the Oseen problem (5.3) with the Q2–Q1 mixed finite element method on uniform grids. The generated saddle point system of type (1.1) has a nonsymmetric positive definite sub-matrix A which corresponds to a discretization of the convection-diffusion operator $L[u]:=-\nu\Delta u+(v\cdot\nabla)u$. In the actual implementation, four values of the viscosity parameter are used, namely ν = 1, 0.1, 0.01, 0.001, and four increasingly fine grids are selected, i.e., 16×16, 32×32, 64×64 and 128×128.

In Tables 1–7, we apply the GMRES iteration method in conjunction with the corresponding preconditioners and present IT and CPU with respect to different sizes of the discretization grids for different values of α, β and ν. As these tables show, the generalized shift-HSS method behaves much better than the GMSS, MSS, GSS, SS, DPSS and IDPSS iteration methods; especially when the problem size increases, the convergence rate of the GMRES iteration method with the generalized shift-HSS preconditioner is much faster than that with the GMSS, MSS, GSS, SS, DPSS and IDPSS preconditioners. Therefore, the generalized shift-HSS preconditioner is more efficient and stable in accelerating the convergence of the GMRES iteration method.

    Table 1.  IT and CPU with ν=1, α=0.01 and β=0.005.
    Grid 64×64 128×128 256×256 512×512
    PSFHSS CPU 2.2468 19.5751 159.8347 978.9446
    IT 4 4 4 4
    PSS CPU 7.4697 127.3370 1.6111e+03 2.0024e+04
    IT 47 169 487 998
    PGSS CPU 6.1840 88.7871 1.2920e+03 8.1583e+04
    IT 36 116 392 4550
    PMSS CPU 6.2798 78.7841 660.9457 1.4831e+04
    IT 37 104 200 817
    PGMSS CPU 7.0076 86.0697 807.2316 1.1505e+04
    IT 42 113 244 631
    PDPSS CPU 81.6477 993.7621 7.5473e+03 1.0011e+05
    IT 260 573 627 1121
    PIDPSS CPU 50.0780 369.2814 2.9020e+03 1.8826e+04
    IT 164 216 243 200

    Table 2.  IT and CPU with ν=0.1, α=0.01 and β=0.005.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.0560 0.3080 2.4352 20.0278
    IT 3 3 4 4
    PSS CPU 0.0755 0.6873 7.9002 128.0045
    IT 11 20 47 169
    PGSS CPU 0.0590 0.4890 6.1251 89.8709
    IT 7 15 36 116
    PMSS CPU 0.0807 0.6550 6.6287 80.3747
    IT 13 17 37 104
    PGMSS CPU 0.0841 0.8567 7.4521 86.2577
    IT 14 24 42 113
    PDPSS CPU 0.9915 8.5094 83.2322 998.5375
    IT 87 180 255 573
    PIDPSS CPU 0.8671 8.6078 51.8677 373.8926
    IT 77 140 164 216

    Table 3.  IT and CPU with ν=0.01, α=0.01 and β=0.005.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.0690 0.3106 2.5461 19.4539
    IT 3 3 4 4
    PSS CPU 0.1031 0.7981 7.7896 130.0777
    IT 11 21 47 169
    PGSS CPU 0.0499 0.5213 5.9485 90.4792
    IT 8 15 36 116
    PMSS CPU 0.0950 0.5529 5.9219 83.7678
    IT 13 17 37 104
    PGMSS CPU 0.0903 0.7730 6.6637 89.1844
    IT 14 24 42 113
    PDPSS CPU 1.0676 9.0995 79.1151 1.0232e+03
    IT 87 150 260 573
    PIDPSS CPU 0.9469 8.0854 49.9778 372.0878
    IT 77 140 164 216

    Table 4.  IT and CPU with ν=1, α=αexp and β=0.05.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.1059 0.4267 2.5201 20.0278
    IT 4 4 4 4
    PSS CPU 0.6867 11.1614 52.6255 913.3530
    IT 52 152 163 515
    PGSS CPU 0.2370 1.1227 4.5158 21.7143
    IT 18 15 13 13

    Table 5.  IT and CPU with ν=0.001, α=0.01 and β=0.005.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.0794 0.3281 2.4645 19.4732
    IT 3 3 4 4
    PSS CPU 0.0876 0.7553 7.6986 128.4442
    IT 11 21 47 169
    PGSS CPU 0.0620 0.5892 6.1956 89.6522
    IT 8 15 36 116
    PMSS CPU 0.0923 0.5876 5.9810 79.4758
    IT 15 17 37 104
    PGMSS CPU 0.0989 0.8472 7.0856 87.2737
    IT 14 24 42 113
    PDPSS CPU 1.0748 9.1363 81.7333 1.0026e+03
    IT 87 150 260 573
    PIDPSS CPU 0.9799 8.5531 51.4426 367.3563
    IT 77 140 164 216

    Table 6.  IT and CPU with ν=0.1, α=αexp and β=0.05.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.0975 0.4061 2.4989 19.6638
    IT 4 4 4 4
    PSS CPU 0.6554 10.7887 52.3130 940.0833
    IT 52 152 163 510
    PGSS CPU 0.2531 1.1453 4.1881 22.5455
    IT 18 15 13 13

    Table 7.  IT and CPU with ν=0.01, α=αexp and β=0.001.
    Grid 16×16 32×32 64×64 128×128
    PSFHSS CPU 0.0595 0.2582 1.5179 12.3510
    IT 2 2 2 4
    PSS CPU 0.6741 11.4814 53.4351 939.0087
    IT 52 152 163 525
    PGSS CPU 0.2454 1.1735 4.6387 22.7694
    IT 18 15 13 13


In Figures 1–4, we show the convergence histories of the corresponding iteration methods to compare the effects of the corresponding preconditioners with respect to the iteration parameters α and β. It is easily seen that the generalized shift-HSS iteration method has smoother convergence curves than the GMSS, MSS, GSS and SS iteration methods.

    Figure 1.  Convergence curves of GMRES iteration methods.
    Figure 2.  Convergence curves of GMRES iteration methods.
    Figure 3.  Convergence curves of GMRES iteration methods.
    Figure 4.  Convergence curves of GMRES iteration methods.

    Example 5.2. [33,34]. Consider the two-dimensional convection-diffusion equation

$$-\Delta u+q\left(\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}\right)=f(x,y)\quad\text{in }\Omega=[0,1]\times[0,1], \qquad (5.4)$$

with Dirichlet boundary conditions and constant coefficient q. Similar to the three-dimensional case proposed in [35], the five-point centered finite difference discretization is used for the above equation, and the nonsymmetric saddle point system (1.1) is readily obtained, where

$$A=\begin{pmatrix}I_l\otimes T_r+T_r\otimes I_l & 0\\ 0 & I_l\otimes T_r+T_r\otimes I_l\end{pmatrix}\in\mathbb{R}^{2l^2\times 2l^2},\qquad B=\begin{pmatrix}I_l\otimes F\\ F\otimes I_l\end{pmatrix}\in\mathbb{R}^{2l^2\times l^2},$$

    and

    $$T_r=\frac{1}{h^2}\,\mathrm{tridiag}(-1-r,\,2,\,-1+r)\in\mathbb{R}^{l\times l},\qquad F=\frac{1}{h}\,\mathrm{tridiag}(-1,\,1,\,0)\in\mathbb{R}^{l\times l}.$$

Here, h = 1/(l+1) denotes the equidistant step-size in each coordinate direction, ⊗ denotes the Kronecker product, and r = qh/2 is the mesh Reynolds number.
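The blocks of Example 5.2 can be assembled directly from the Kronecker-product formulas above; a sketch in SciPy with sparse storage (the values of l and q are chosen arbitrarily for illustration):

```python
import numpy as np
import scipy.sparse as sp

def saddle_blocks(l, q):
    """Assemble A and B of Example 5.2 (centered differences, h = 1/(l+1))."""
    h = 1.0 / (l + 1)
    r = q * h / 2.0                                 # mesh Reynolds number
    I = sp.identity(l, format="csr")
    T = sp.diags([-1.0 - r, 2.0, -1.0 + r], [-1, 0, 1], shape=(l, l)) / h**2
    F = sp.diags([-1.0, 1.0], [-1, 0], shape=(l, l)) / h
    L = sp.kron(I, T) + sp.kron(T, I)               # I_l (x) T_r + T_r (x) I_l
    A = sp.block_diag((L, L))                       # (1,1)-block, 2l^2 x 2l^2
    B = sp.vstack([sp.kron(I, F), sp.kron(F, I)])   # 2l^2 x l^2
    return A.tocsr(), B.tocsr()

A, B = saddle_blocks(l=16, q=0.1)
print(A.shape, B.shape)  # (512, 512) (512, 256)
```

With l = 16 this reproduces the smallest grid in Tables 8–11; the saddle point matrix itself is then [[A, B], [-B.T, 0]] in block form.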

In Tables 8–11, in terms of both IT and CPU, we use the SFHSS, DPSS and IDPSS preconditioners to accelerate the GMRES iteration method for different sizes of the discretization grids and different values of q, with α = αexp and β = 0.00001. As these tables show, the SFHSS method outperforms the DPSS and IDPSS methods; especially when the problem size increases, the convergence rate of the GMRES iteration method with the generalized shift-HSS preconditioner is much faster than that with the DPSS and IDPSS preconditioners. Therefore, the generalized SFHSS preconditioner is more efficient and stable.

    Table 8.  IT and CPU with q=0.01, α=αexp and β=0.00001.
    l 16 32 64 128
    PSFHSS CPU 0.0430 0.1943 1.1330 6.9972
    IT 3 4 4 5
    PSS CPU 0.5416 6.8962 142.8853 743.6954
    IT 68 127 650 680
    PGSS CPU 0.0949 0.4838 2.1202 10.0982
    IT 10 10 9 8

    Table 9.  IT and CPU with q=0.1, α=αexp and β=0.00001.
    l 16 32 64 128
    PSFHSS CPU 0.0405 0.1926 1.1506 6.8099
    IT 4 4 4 5
    PSS CPU 0.5003 5.7164 62.4237 807.8969
    IT 66 152 341 670
    PGSS CPU 0.0804 0.3724 1.8443 8.7051
    IT 10 10 9 8

    Table 10.  IT and CPU with q=1, α=αexp and β=0.00001.
    l 16 32 64 128
    PSFHSS CPU 0.0405 0.2352 1.3521 7.8656
    IT 4 4 4 5
    PSS CPU 0.7747 12.3715 151.9526 988.8506
    IT 86 258 694 840
    PGSS CPU 0.0899 0.5204 2.2006 9.8894
    IT 10 10 9 8

    Table 11.  IT and CPU with q=10, α=αexp and β=0.00001.
    l 16 32 64 128
    PSFHSS CPU 0.0650 0.2451 1.3483 7.9823
    IT 4 4 4 5
    PSS CPU 0.6067 11.2988 165.5783 2.1922e+03
    IT 74 273 761 1844
    PGSS CPU 0.0964 0.4138 2.1823 17.0936
    IT 12 10 9 8


The novelty of the present paper is the construction and analysis of the generalized shift-HSS iteration method for nonsingular saddle point systems with a nonsymmetric positive definite (1,1)-block. We investigate the convergence properties of the SFHSS iteration method and further illustrate the robustness and efficiency of the generalized shift-HSS preconditioner by numerical examples. Future work should focus on developing modified forms of the GSS iteration method and a generalized shift GSOR-like method for complex symmetric linear systems, and on studying the effects of the iteration parameters on the eigenvalue clustering of the corresponding preconditioned matrices.

    This research is supported by Scientific and Technological Research Program of Chongqing Research Program of Basic Research and Frontier Technology (cstc2018jcyjAX0685), Xiangsi Lake Young Scholars Innovation Team of Guangxi Minzu University (No.2021RSCXSHQN05) and Natural Science Foundation of Guangxi Minzu University (2021KJQD01).

The authors declare that they have no conflict of interest regarding this work.



    [1] J. M. Mendel, R. I. John, F. L. Liu, Interval type-2 fuzzy logic systems made simple, IEEE T. Fuzzy Sys., 14 (2006), 808–821. https://doi.org/10.1109/TFUZZ.2006.879986 doi: 10.1109/TFUZZ.2006.879986
    [2] D. R. Wu, J. M. Mendel, Uncertainty measures for interval type-2 fuzzy sets, Inform. Sci., 177 (2007), 5378–5393. https://doi.org/10.1016/j.ins.2007.07.012 doi: 10.1016/j.ins.2007.07.012
    [3] A. Khosravi, S. Nahavandi, Load forecasting using interval type-2 fuzzy logic systems: Optimal type reduction, IEEE T. Indust. Inform., 10 (2014), 1055–1063. https://doi.org/10.1109/TII.2013.2285650 doi: 10.1109/TII.2013.2285650
    [4] J. M. Mendel, F. L. Liu, D. Y. Zhai, Alpha-plane representation for type-2 fuzzy sets: Theory and applications, IEEE T. Fuzzy Syst., 17 (2009), 1189–1207. https://doi.org/10.1109/TFUZZ.2009.2024411 doi: 10.1109/TFUZZ.2009.2024411
    [5] C. Wagner, H. Hagras, Toward general type-2 fuzzy logic systems based on zSlices, IEEE T. Fuzzy Syst., 18 (2010), 637–660. https://doi.org/10.1109/TFUZZ.2010.2045386 doi: 10.1109/TFUZZ.2010.2045386
    [6] F. L. Liu, An efficient centroid type-reduction strategy for general type-2 fuzzy logic system, Inf. Sci., 178 (2008), 2224–2236. https://doi.org/10.1016/j.ins.2007.11.014 doi: 10.1016/j.ins.2007.11.014
    [7] J. M. Mendel, General type-2 fuzzy logic systems made simple: A tutorial, IEEE T. Fuzzy Syst., 22 (2014), 1162–1182. https://doi.org/10.1109/TFUZZ.2013.2286414 doi: 10.1109/TFUZZ.2013.2286414
    [8] C. I. Gonzalez, P. Melin, J. R. Castro, O. Mendoza, O. Castillo, An improved sobel edge detection method based on generalized type-2 fuzzy logic, Soft Comput., 20 (2016), 773–784. https://doi.org/10.1007/s00500-014-1541-0 doi: 10.1007/s00500-014-1541-0
    [9] P. Melin, C. I. Gonzalez, J. R. Castro, O. Mendoza, O. Castillo, Edge-detection method for image processing based on generalized type-2 fuzzy logic, IEEE T. Fuzzy Syst., 22 (2014), 1515–1525. https://doi.org/10.1109/TFUZZ.2013.2297159 doi: 10.1109/TFUZZ.2013.2297159
    [10] M. A. Sanchez, O. Castillo, J. R. Castro, Generalized type-2 fuzzy systems for controlling a mobile robot and a performance comparison with interval type-2 and type-1 fuzzy systems, Expert Syst. Appl., 42 (2015), 5904–5914. https://doi.org/10.1016/j.eswa.2015.03.024 doi: 10.1016/j.eswa.2015.03.024
    [11] O. Castillo, L. Amador-Angulo, J. R. Castro, M. Garcia-Valdez, A comparative study of type-1 fuzzy logic systems, interval type-2 fuzzy logic systems and generalized type-2 fuzzy logic systems in control problems, Inform. Sci., 354 (2016), 257–274. https://doi.org/10.1016/j.ins.2016.03.026
    [12] Y. Chen, D. Z. Wang, W. Ning, Forecasting by TSK general type-2 fuzzy logic systems optimized with genetic algorithms, Opt. Contr. Appl. Meth., 39 (2018), 393–409. https://doi.org/10.1002/oca.2353 doi: 10.1002/oca.2353
    [13] Y. Chen, C. X. Li, J. X. Yang, Design and application of Nagar-Bardini structure-based interval type-2 fuzzy logic systems optimized with the combination of backpropagation algorithms and recursive least square algorithms, Expert Syst. Appl., 211 (2023), 1–10. https://doi.org/10.1016/J.ESWA.2022.118596 doi: 10.1016/J.ESWA.2022.118596
    [14] Y. Chen, D. Z. Wang, Forecasting by general type-2 fuzzy logic systems optimized with QPSO algorithms, Int. J. Contr. Auto. Syst., 15 (2017), 2950–2958. https://doi.org/10.1007/s12555-017-0793-0 doi: 10.1007/s12555-017-0793-0
    [15] L. X. Wang, A new look at type-2 fuzzy sets and type-2 fuzzy logic systems, IEEE T. Fuzzy Syst., 25 (2017), 693–706. https://doi.org/10.1109/tfuzz.2016.2543746 doi: 10.1109/tfuzz.2016.2543746
    [16] D. R. Wu, J. M. Mendel, Enhanced Karnik-Mendel algorithms, IEEE T. Fuzzy Syst., 17 (2009), 923–934. https://doi.org/10.1109/TFUZZ.2008.924329 doi: 10.1109/TFUZZ.2008.924329
    [17] J. M. Mendel, On KM algorithms for solving type-2 fuzzy sets problems, IEEE T. Fuzzy Syst., 21 (2013), 426–446. https://doi.org/10.1109/TFUZZ.2012.2227488 doi: 10.1109/TFUZZ.2012.2227488
    [18] X. W. Liu, J. M. Mendel, D. R. Wu, Study on enhanced Karnik-Mendel algorithms: initialization explanations and computation improvements, Inform. Sci., 184 (2012), 75–91. https://doi.org/10.1016/j.ins.2011.07.042 doi: 10.1016/j.ins.2011.07.042
    [19] X. L. Liu, S. P. Wan, Combinatorial iterative algorithms for computing the centroid of an interval type-2 fuzzy set, IEEE T. Fuzzy Syst., 28 (2020), 607–617. https://doi.org/10.1109/TFUZZ.2019.2911918 doi: 10.1109/TFUZZ.2019.2911918
    [20] J. M. Mendel, X. W. Liu, Simplified interval type-2 fuzzy logic systems, IEEE T. Fuzzy Syst., 21 (2013), 1056–1069. https://doi.org/10.1109/TFUZZ.2013.2241771 doi: 10.1109/TFUZZ.2013.2241771
    [21] C. Chen, R. John, J. Twycross, J. M. Garibaldi, A direct approach for determining the switch points in the Karnik-Mendel algorithm, IEEE T. Fuzzy Syst., 26 (2018), 1079–1085. https://doi.org/10.1109/tfuzz.2017.2699168 doi: 10.1109/tfuzz.2017.2699168
    [22] Z. Zhang, X. Zhao, Y. Qin, H. Si, L. Zhou, Interval type-2 fuzzy TOPSIS approach with utility theory for subway station operational risk evaluation, J. Ambient Intel. Human. comput., 13 (2022), 4849–4863. https://doi.org/10.1007/s12652-021-03182-0 doi: 10.1007/s12652-021-03182-0
    [23] X. L. Liu, Y. C. Lin, New efficient algorithms for the centroid of an interval type-2 fuzzy set, Inform. Sci., 570 (2021), 1–19. https://doi.org/10.1016/j.ins.2021.04.032 doi: 10.1016/j.ins.2021.04.032
    [24] Y. Chen, J. X. Wu, J. Lan, Study on reasonable initialization enhanced Karnik-Mendel algorithms for centroid type-reduction of interval type-2 fuzzy logic systems, AIMS Math., 5 (2020), 6149–6168. https://doi.org/10.3934/math.2020395 doi: 10.3934/math.2020395
    [25] J. H. Wang, W. Ji, X. K. Fang, S. S. Gu, Improvement of enhanced Karnik-Mendel algorithm for interval type-2 fuzzy sets, Contr. Deci., 28 (2013), 1165–1172. https://doi.org/10.13195/j.kzyjc.2013.08.002 doi: 10.13195/j.kzyjc.2013.08.002
    [26] Y. Chen, D. Z. Wang, S. C. Tong, Forecasting studies by designing Mamdani interval type-2 fuzzy logic systems: With combination of BP algorithms and KM algorithms, Neurocomputing, 174 (2016), 1133–1146. https://doi.org/10.1016/j.neucom.2015.10.032 doi: 10.1016/j.neucom.2015.10.032
    [27] Y. Chen, D. Z. Wang, Forecasting by designing Mamdani general type-2 fuzzy logic systems optimized with quantum particle swarm optimization algorithms, Trans. Inst. Meas. Contr., 41 (2019), 2886–2896. https://doi.org/10.1177/0142331218816753 doi: 10.1177/0142331218816753
    [28] G. M. Méndez, M. D. L. A. Hernandez, Hybrid learning mechanism for interval A2-C1 type-2 non-singleton type-2 Takagi-Sugeno-Kang fuzzy logic systems, Inform. Sci., 220 (2013), 149–169. https://doi.org/10.1016/j.ins.2012.01.024 doi: 10.1016/j.ins.2012.01.024
    [29] Y. Chen, J. X. Yang, C. X. Li, Design of Takagi Sugeno Kang type interval type-2 fuzzy logic systems optimized with hybrid algorithms, Int. J. Fuzzy Syst., 25 (2023), 868–879. https://doi.org/10.1007/S40815-022-01410-Z doi: 10.1007/S40815-022-01410-Z
    [30] Y. Chen, Study on non-iterative algorithms for center-of-sets type-reduction of Takagi-Sugeno-Kang type general type-2 fuzzy logic systems, Compl. Intell. Syst., 9 (2023), 4015–4023. https://doi.org/10.1007/s40747-022-00927-y doi: 10.1007/s40747-022-00927-y
    [31] Y. Chen, Study on centroid type-reduction of general type-2 fuzzy logic systems with sensible beginning weighted enhanced Karnik-Mendel algorithms, Soft Comput., 27 (2023), 9261–9279. https://doi.org/10.1007/S00500-023-08269-8 doi: 10.1007/S00500-023-08269-8
    [32] Y. Chen, J. X. Yang, C. X. Li, Design of reasonable initialization weighted enhanced Karnik-Mendel algorithms for centroid type-reduction of interval type-2 fuzzy logic systems, AIMS Math., 7 (2022), 9846–9870. https://doi.org/10.3934/math.2022549 doi: 10.3934/math.2022549
    [33] Y. Chen, D. Z. Wang, Study on centroid type-reduction of general type-2 fuzzy logic systems with weighted Nie-Tan algorithms, Soft Comput., 22 (2018), 7659–7678. https://doi.org/10.1007/s00500-018-3551-9 doi: 10.1007/s00500-018-3551-9
    [34] Y. Chen, D. Z. Wang, Study on centroid type-reduction of general type-2 fuzzy logic systems with weighted enhanced Karnik-Mendel algorithms, Soft Comput., 22 (2018), 1361–1380. https://doi.org/10.1007/s00500-017-2938-3 doi: 10.1007/s00500-017-2938-3
    [35] Y. Chen, Study on centroid type-reduction of general type-2 fuzzy logic systems with sensible beginning weighted enhanced Karnik-Mendel algorithms, Soft Comput., 27 (2023), 9261–9279. https://doi.org/10.1007/S00500-023-08269-8 doi: 10.1007/S00500-023-08269-8
    [36] D. R. Wu, Approaches for reducing the computational cost of interval type-2 fuzzy logic systems: Overview and comparisons, IEEE T. Fuzzy Syst., 21 (2013), 80–99. https://doi.org/10.1109/TFUZZ.2012.2201728 doi: 10.1109/TFUZZ.2012.2201728
    [37] J. W. Li, R. John, S. Coupland, G. Kendall, On Nie-Tan operator and type-reduction of interval type-2 fuzzy sets, IEEE T. Fuzzy Syst., 26 (2018), 1036–1039. https://doi.org/10.1109/TFUZZ.2017.2666842 doi: 10.1109/TFUZZ.2017.2666842
    [38] S. Greenfield, F. Chiclana, Accuracy and complexity evaluation of defuzzification strategies for the discretised interval type-2 fuzzy set, Int. J. Approx. Reason., 54 (2013), 1013–1033. https://doi.org/10.1016/j.ijar.2013.04.013 doi: 10.1016/j.ijar.2013.04.013
    [39] S. Greenfield, F. Chiclana, S. Coupland, R. John, The collapsing method of defuzzification for discretised interval type-2 fuzzy sets, Inform. Sci., 179 (2009), 2055–2069. https://doi.org/10.1016/j.ins.2008.07.011 doi: 10.1016/j.ins.2008.07.011
    [40] Y. Chen, Study on weighted-based noniterative algorithms for computing the centroids of general type-2 fuzzy sets, Int. J. Fuzzy Syst., 24 (2022), 587–606. https://doi.org/10.1007/S40815-021-01166-Y doi: 10.1007/S40815-021-01166-Y
    [41] C. Chen, D. Wu, J. M. Garibaldi, R. I. John, J. Twycross, J. M. Mendel, A comprehensive study of the efficiency of type-reduction algorithms, IEEE T. Fuzzy Syst., 29 (2020), 1556–1566. https://doi.org/10.1109/TFUZZ.2020.2981002 doi: 10.1109/TFUZZ.2020.2981002
    [42] M. A. Khanesar, A. Jalalian, O. Kaynak, H. Gao, Improving the speed of center of sets type reduction in interval type-2 fuzzy systems by eliminating the need for sorting, IEEE T. Fuzzy Syst., 25 (2017), 1193–1206. https://doi.org/10.1109/TFUZZ.2016.2602392 doi: 10.1109/TFUZZ.2016.2602392
    [43] Y. Chen, C. X. Li, J. X. Yang, Design of discrete noniterative algorithms for center-of-sets type reduction of general type-2 fuzzy logic systems, Int. J. Fuzzy Syst., 24 (2022), 2024–2035. https://doi.org/10.1007/S40815-022-01256-5 doi: 10.1007/S40815-022-01256-5
    [44] F. Gaxiola, P. Melin, F. Valdez, J. R. Castro, O. Castillo, Optimization of type-2 fuzzy weights in backpropagation learning for neural networks using GAs and PSO, Appl. Soft Comput., 38 (2016), 860–871. https://doi.org/10.1016/j.asoc.2015.10.027 doi: 10.1016/j.asoc.2015.10.027
    [45] H. Mo, F. Y. Wang, M. Zhou, R. Li, Z. Xiao, Footprint of uncertainty for type-2 fuzzy sets, Inform. Sci., 272 (2014), 96–110. https://doi.org/10.1016/j.ins.2014.02.092 doi: 10.1016/j.ins.2014.02.092
    [46] F. Y. Wang, H. Mo, Some fundamental issues on type-2 fuzzy sets, Acta Auto. Sin., 43 (2017), 1114–1141. https://doi.org/10.16383/j.aas.2017.c160638 doi: 10.16383/j.aas.2017.c160638
    [47] C. H. Hsu, C. F. Juang, Evolutionary robot wall-following control using type-2 fuzzy controller with species-de-activated continuous ACO, IEEE T. Fuzzy Syst., 21 (2013), 100–112. https://doi.org/10.1109/TFUZZ.2012.2202665 doi: 10.1109/TFUZZ.2012.2202665
    [48] N. Mansoureh, F. Z. M. Hossein, B. Susan, A multilayer general type-2 fuzzy community detection model in large-scale social networks, IEEE T. Fuzzy Syst., 30 (2022), 4494–4503. https://doi.org/10.1109/TFUZZ.2022.3153745 doi: 10.1109/TFUZZ.2022.3153745
    [49] E. Ontiveros, P. Melin, O. Castillo, Comparative study of interval type-2 and general type-2 fuzzy systems in medical diagnosis, Inform. Sci., 525 (2020), 37–53. https://doi.org/10.1016/j.ins.2020.03.059 doi: 10.1016/j.ins.2020.03.059
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)