Research article

Study on reasonable initialization enhanced Karnik-Mendel algorithms for centroid type-reduction of interval type-2 fuzzy logic systems

  • Type-reduction (TR) is a key block for interval type-2 fuzzy logic systems (IT2 FLSs). In general, Karnik-Mendel (KM) (or enhanced Karnik-Mendel (EKM)) algorithms are used to perform the TR. These two types of algorithms have the advantage of preserving the uncertainties of membership functions (MFs) as they flow through IT2 FLSs. This paper explains the initialization of KM and EKM algorithms and proposes reasonable initialization enhanced Karnik-Mendel (RIEKM) algorithms for centroid TR of IT2 FLSs. Taking the accurate continuous Nie-Tan (CNT) algorithms as the benchmark, four computer simulation examples are adopted to illustrate and analyze the performance of the RIEKM algorithms for the centroid TR and defuzzification of IT2 FLSs. Compared with the EKM algorithms, the proposed RIEKM algorithms have smaller absolute errors and faster convergence speeds, which offers potential value for designing and applying IT2 FLSs.

    Citation: Yang Chen, Jinxia Wu, Jie Lan. Study on reasonable initialization enhanced Karnik-Mendel algorithms for centroid type-reduction of interval type-2 fuzzy logic systems[J]. AIMS Mathematics, 2020, 5(6): 6149-6168. doi: 10.3934/math.2020395



    Problems of artificial intelligence (AI) can involve complex data or tasks; consequently, neural networks (NNs), as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], can help avoid designing AI functions manually. NNs have been applied in various fields, including biology, artificial intelligence, static image processing, associative memory, electrical engineering and signal processing. The connections between neurons carry weights: a positive weight reflects an excitatory connection, while a negative weight inhibits the connection.

    Activation functions determine the output of learning models and the accuracy of the training process, and they can make or break a large NN. Activation functions are also important for the convergence ability and convergence speed of NNs; in some cases, the activation function may even prevent convergence in the first place, as reported in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. NNs consist of processing units and learning algorithms. Time delay is one of the common features of the operation of neurons; it plays an important role in reducing efficiency and stability, and may lead to dynamic behavior involving chaos, uncertainty and divergence, as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Therefore, NNs with time delay have received considerable attention in many fields, as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25].

    It is well known that many real processes depend on delays, whereby the current state of affairs depends on previous states. Delays occur in many control systems, for example, aircraft control systems, biological models, chemical processes and electrical networks. Time delay is often the main source of instability and poor performance of a system.

    There are two kinds of stability conditions for time-delay systems: delay-dependent and delay-independent. Delay-dependent conditions are usually less conservative than delay-independent ones, especially when the delay is relatively small. Delay-dependent stability conditions depend mainly on the upper bound of the allowable delay. Delay-dependent stability for interval time-varying delay has been broadly studied and applied in various research fields [3,13,14,15,16,19,22,23,24,28]. A time-varying delay whose range is restricted to a bounded interval is called an interval time-varying delay. Some researchers have studied NN problems with interval time-varying delay, as in [1,2,3,4,5,7,11,12,13,14,15,21,25], while [16] reported on NN stability with additive time-varying delay.

    There are two types of stability over a finite time interval, namely finite-time stability and fixed-time stability. With finite-time stability, the system converges within a certain period for any initial condition, while with fixed-time stability the convergence time is the same for all initial conditions within the domain. Both finite-time stability and fixed-time stability have been extensively studied in many fields, such as [26,29,30,31,32,33,34,35,37,38]. In [34], J. Puangmalai et al. investigated finite-time stability criteria of linear systems with non-differentiable time-varying delay via a new integral inequality based on a free matrix for bounding the integral $\int_{a}^{b}\dot z^T(s)M\dot z(s)\,ds$, and obtained new sufficient conditions for the system in the form of inequalities and linear matrix inequalities. The finite-time stability criteria of neutral-type neural networks with hybrid time-varying delays were studied by using the definition of finite-time stability, the Lyapunov function method and bounding inequality techniques; see [37]. Similarly, in [38], M. Zheng et al. studied the finite-time stability and synchronization problems of a memristor-based fractional-order fuzzy cellular neural network by applying the existence and uniqueness of the Filippov solution of the network combined with the Banach fixed point theorem, the definition of finite-time stability of the network and the Gronwall-Bellman inequality, and by designing a simple linear feedback controller.

    Stability analysis of time-delay systems usually applies an appropriate Lyapunov-Krasovskii functional (LKF) technique, as in [1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,34,36,39], estimating the upper bounds of its derivative along the trajectories of the system. Triple and quadruple integrals may be useful in the LKF, as in [1,2,5,8,10,11,12,13,16,18,19,23,25,36,39]. Many techniques have been applied to estimate the upper bound of the LKF derivative, such as the Jensen inequality [1,2,5,6,8,11,18,19,24,25,28,34,36,39], the Wirtinger-based integral inequality [4,10], a tighter inequality lemma [20], delay-dependent stability [3,13,14,15,16,19,22,23,24,28], the delay partitioning method [9,15,27], the free-weighting matrix variables method [1,10,15,17,18,23,26,34], positive diagonal matrices [2,5,6,8,10,11,12,13,16,17,19,25,27,28], linear matrix inequality (LMI) techniques [1,3,8,9,11,12,13,15,21,23,24,26,28,39] and other techniques [9,13,14,16,18,36]. In [4], H. B. Zeng investigated stability and dissipativity analysis for static neural networks (NNs) with interval time-varying delay via a new augmented LKF by applying the Wirtinger-based inequality. In [6], Z.-M. Gao et al. studied the stability problem for neural networks with time-varying delay via a new LKF, where the time delay needs to be differentiable.

    Based on the above, the topic of finite-time exponential stability criteria of NNs with non-differentiable time-varying delay is investigated. As a first effort, this article addresses this issue and its main contributions are:

    -We introduce a new augmented LKF term
    $V_1(t,x_t)=x^T(t)P_1x(t)+2x^T(t)P_2\int_{t-h_2}^{t}x(s)\,ds+\Big(\int_{t-h_2}^{t}x(s)\,ds\Big)^TP_3\int_{t-h_2}^{t}x(s)\,ds+2x^T(t)P_4\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds+2\Big(\int_{t-h_2}^{t}x(s)\,ds\Big)^TP_5\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds+\Big(\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds\Big)^TP_6\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds$
    to analyze the finite-time stability of NNs. The augmented Lyapunov matrices $P_i$, $i=1,2,3,4,5,6$, are not required to be positive definite.

    -In the finite-time stability problem of NNs considered here, the time-varying delay is non-differentiable, which is different from the time-delay cases in [1,2,3,4,5,6,7,15,20].

    -Numerical examples illustrate that the results of this research are much less conservative than the finite-time stability criteria in [1,2,3,4,5,6,7,15,20].

    The new LKF, which includes triple integral terms, is handled by utilizing Jensen's inequality, a new integral inequality from [34] and the corollary from [39], together with the activation-function conditions and a positive diagonal matrix, without free-weighting matrix variables, within the finite-time stability framework. Some novel sufficient conditions are obtained for the finite-time stability of NNs with time-varying delays in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to show the benefit of the new LKF approach. To the best of our knowledge, to date, there have been no publications on the finite-time exponential stability of NNs with non-differentiable time-varying delay.

    The rest of the paper is arranged as follows. Section 2 presents the considered network and some definitions, propositions and lemmas. Section 3 establishes the finite-time exponential stability of NNs with time-varying delay via the new LKF method. Numerical examples with theoretical results and conclusions are provided in Sections 4 and 5, respectively.

    Throughout this paper, the following notation is used: $\mathbb{R}$ stands for the set of real numbers; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; $\mathbb{R}^{m\times n}$ is the set of all $m\times n$ real matrices; $A^T$ and $A^{-1}$ denote the transpose and the inverse of a matrix $A$, respectively; $A$ is symmetric if $A=A^T$; if $A$ and $B$ are symmetric matrices, $A>B$ means that $A-B$ is a positive definite matrix; $I$ denotes the identity matrix of appropriate dimension; the symmetric term in a matrix is denoted by $*$; $\mathrm{sym}\{A\}=A+A^T$; and a block diagonal matrix is denoted by $\mathrm{diag}\{\dots\}$.

    Let us consider the following neural network with time-varying delays:

    $\dot x(t)=-Ax(t)+Bf(Wx(t))+Cg(Wx(t-h(t))),\qquad x(t)=\phi(t),\ \ t\in[-h_2,0],$                                     (2.1)

    where $x(t)=[x_1(t),x_2(t),\dots,x_n(t)]^T$ denotes the state vector with $n$ neurons; $A=\mathrm{diag}\{a_1,a_2,\dots,a_n\}>0$ is a diagonal matrix; $B$ and $C$ are known real constant matrices with appropriate dimensions; $f(Wx(\cdot))=[f_1(W_1x(\cdot)),f_2(W_2x(\cdot)),\dots,f_n(W_nx(\cdot))]^T$ and $g(Wx(\cdot))=[g_1(W_1x(\cdot)),g_2(W_2x(\cdot)),\dots,g_n(W_nx(\cdot))]^T$ denote the neuron activation functions; $W=[W_1^T,W_2^T,\dots,W_n^T]^T$ is the delayed connection weight matrix; and $\phi(t)\in C([-h_2,0],\mathbb{R}^n)$ is the initial function. The time-varying delay function $h(t)$ satisfies the following conditions:

      $0\le h_1\le h(t)\le h_2,\qquad h_1\ne h_2,$ (2.2)

    where $h_1$ and $h_2$ are known real constant scalars.

    The neuron activation functions satisfy the following condition:

    Assumption 1. The neuron activation function $f(\cdot)$ is continuous and bounded, and satisfies:

    $k_i^-\le\dfrac{f_i(\theta_1)-f_i(\theta_2)}{\theta_1-\theta_2}\le k_i^+,\qquad \forall\,\theta_1,\theta_2\in\mathbb{R},\ \theta_1\ne\theta_2,\ i=1,2,\dots,n,$ (2.3)

    when $\theta_2=0$, Eq (2.3) can be rewritten as the following condition:

    $k_i^-\le\dfrac{f_i(\theta_1)}{\theta_1}\le k_i^+,$ (2.4)

    where $f(0)=0$ and $k_i^-$, $k_i^+$ are given constants.

    From (2.3) and (2.4), for i=1,2,...,n, it follows that

    $[f_i(\theta_1)-f_i(\theta_2)-k_i^-(\theta_1-\theta_2)][k_i^+(\theta_1-\theta_2)-f_i(\theta_1)+f_i(\theta_2)]\ge 0,$ (2.5)
    $[f_i(\theta_1)-k_i^-\theta_1][k_i^+\theta_1-f_i(\theta_1)]\ge 0.$ (2.6)
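    As a concrete instance (an illustrative example added here, not taken from the original text), the hyperbolic tangent activation $f_i(\theta)=\tanh(\theta)$ satisfies Assumption 1 with $k_i^-=0$ and $k_i^+=1$, since $\tanh(0)=0$ and

    $0\le\dfrac{\tanh(\theta_1)-\tanh(\theta_2)}{\theta_1-\theta_2}\le 1,\qquad \theta_1\ne\theta_2,$

    so (2.3)-(2.6) hold with sector bounds $0$ and $1$.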

    Based on Assumption 1, there exists an equilibrium point $x^*=[x_1^*,x_2^*,\dots,x_n^*]^T$ of neural network (2.1).
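    As a complement to the model description (a sketch added here, not part of the original analysis), the following minimal code shows how a system of the form (2.1) with an interval time-varying delay satisfying (2.2) can be simulated with a fixed-step Euler scheme and a history buffer. All matrices, the delay function, the activations and the step size below are illustrative placeholders rather than data from this paper; the sign convention $\dot x(t)=-Ax(t)+\cdots$ follows (2.1).

```python
import numpy as np

# Fixed-step Euler simulation of a delayed NN of the form (2.1):
#   xdot(t) = -A x(t) + B f(W x(t)) + C g(W x(t - h(t))),
# with 0 <= h1 <= h(t) <= h2 as in (2.2).
# All parameters below are illustrative placeholders, not data from the paper.

def simulate(A, B, C, W, f, g, h, phi, h2, T=10.0, dt=1e-3):
    """Simulate (2.1) on [0, T]; phi(t) supplies the history on [-h2, 0]."""
    n_hist = int(round(h2 / dt))            # number of history samples
    steps = int(round(T / dt))
    n = A.shape[0]
    x = np.zeros((n_hist + steps + 1, n))
    for k in range(n_hist + 1):             # fill the history segment [-h2, 0]
        x[k] = phi(-h2 + k * dt)
    for k in range(n_hist, n_hist + steps):
        t = (k - n_hist) * dt
        delay_idx = k - int(round(h(t) / dt))   # index of x(t - h(t))
        xdot = -A @ x[k] + B @ f(W @ x[k]) + C @ g(W @ x[delay_idx])
        x[k + 1] = x[k] + dt * xdot
    return x[n_hist:]                       # trajectory on [0, T]

# Illustrative 2-dimensional example (hypothetical parameters).
A = np.diag([2.0, 3.0])
B = np.zeros((2, 2))
C = np.eye(2)
W = np.array([[0.5, -0.2], [0.1, 0.4]])
h = lambda t: 0.6 + 0.5 * abs(np.sin(t))    # interval delay, 0.6 <= h(t) <= 1.1
traj = simulate(A, B, C, W, np.tanh, np.tanh, h,
                lambda t: np.array([0.8, -0.3]), h2=1.1)
print(traj[-1])                             # state at t = 10
```

    A finer step size or a higher-order integrator can be substituted; the only delay-specific ingredient is the history buffer used to look up $x(t-h(t))$.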

    To prove the main results, the following Definition, Proposition, Corollary and Lemmas are useful.

    Definition 1. [34] Given a positive definite matrix $M$ and positive constants $k_1,k_2,T_f$ with $k_1<k_2$, the time-delay system described by (2.1) with delay condition (2.2) is said to be finite-time stable with respect to $(k_1,k_2,T_f,h_1,h_2,M)$ if the state variables satisfy the following relationship:

    $\sup_{-h_2\le s\le 0}\{z^T(s)Mz(s),\ \dot z^T(s)M\dot z(s)\}\le k_1\ \Longrightarrow\ z^T(t)Mz(t)<k_2,\qquad \forall\,t\in[0,T_f].$

    Proposition 2. [34] For any positive definite matrix $Q$ and any differentiable function $z:[b_{dL},b_{dU}]\to\mathbb{R}^n$, the following inequality holds:

    $-6\,b_{dUL}\displaystyle\int_{b_{dL}}^{b_{dU}}\dot z^T(s)Q\dot z(s)\,ds\le\bar\zeta^T\begin{bmatrix}-22Q & 10Q & 32Q\\ * & -16Q & 26Q\\ * & * & -58Q\end{bmatrix}\bar\zeta,$

    where $\bar\zeta^T=\big[z(b_{dU})\ \ z(b_{dL})\ \ \tfrac{1}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds\big]$ and $b_{dUL}=b_{dU}-b_{dL}$.

    Lemma 3. [40] (Schur complement) Given constant symmetric matrices $X,Y,Z$ satisfying $X=X^T$ and $Y=Y^T>0$, then $X+Z^TY^{-1}Z<0$ if and only if

    $\begin{bmatrix}X & Z^T\\ Z & -Y\end{bmatrix}<0,\qquad\text{or}\qquad\begin{bmatrix}-Y & Z\\ Z^T & X\end{bmatrix}<0.$
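    As a quick scalar illustration of Lemma 3 (our example, not taken from the paper): with $X=-3$, $Y=1$ and $Z=1$ we have $X+Z^TY^{-1}Z=-2<0$, and correspondingly

    $\begin{bmatrix}X & Z^T\\ Z & -Y\end{bmatrix}=\begin{bmatrix}-3 & 1\\ 1 & -1\end{bmatrix}<0,$

    since its leading principal minors satisfy $-3<0$ and $\det=3-1=2>0$.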

    Corollary 4. [39] For a given symmetric matrix $Q>0$, any vector $\nu_0$, any matrices $J_1,J_2,J_3,J_4$ with proper dimensions and any continuously differentiable function $z:[b_{dL},b_{dU}]\to\mathbb{R}^n$, the following inequalities hold:

    $-\displaystyle\int_{b_{dL}}^{b_{dU}}\int_{\delta}^{b_{dU}}\dot z^T(s)Q\dot z(s)\,ds\,d\delta\le\nu_0^T\big(2J_1Q^{-1}J_1^T+4J_2Q^{-1}J_2^T\big)\nu_0+2\nu_0^T\big(2J_1\gamma_1+4J_2\gamma_2\big),$
    $-\displaystyle\int_{b_{dL}}^{b_{dU}}\int_{b_{dL}}^{\delta}\dot z^T(s)Q\dot z(s)\,ds\,d\delta\le\nu_0^T\big(2J_3Q^{-1}J_3^T+4J_4Q^{-1}J_4^T\big)\nu_0+2\nu_0^T\big(2J_3\gamma_3+4J_4\gamma_4\big),$

    where $b_{dUL}=b_{dU}-b_{dL}$,

    $\gamma_1=z(b_{dU})-\frac{1}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds,\qquad \gamma_2=z(b_{dU})+\frac{2}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds-\frac{6}{(b_{dUL})^2}\int_{b_{dL}}^{b_{dU}}\int_{\delta}^{b_{dU}}z(s)\,ds\,d\delta,$
    $\gamma_3=\frac{1}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds-z(b_{dL}),\qquad \gamma_4=z(b_{dL})-\frac{4}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds+\frac{6}{(b_{dUL})^2}\int_{b_{dL}}^{b_{dU}}\int_{\delta}^{b_{dU}}z(s)\,ds\,d\delta.$

    Lemma 5. [39] For any matrix $Q>0$ and any differentiable function $z:[b_{dL},b_{dU}]\to\mathbb{R}^n$ such that the integrals below are well defined,

    $-b_{dUL}\displaystyle\int_{b_{dL}}^{b_{dU}}\dot z^T(s)Q\dot z(s)\,ds\le-\kappa_1^TQ\kappa_1-3\kappa_2^TQ\kappa_2-5\kappa_3^TQ\kappa_3,$

    where $\kappa_1=z(b_{dU})-z(b_{dL})$, $\kappa_2=z(b_{dU})+z(b_{dL})-\frac{2}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds$,

    $\kappa_3=z(b_{dU})-z(b_{dL})+\frac{6}{b_{dUL}}\int_{b_{dL}}^{b_{dU}}z(s)\,ds-\frac{12}{(b_{dUL})^2}\int_{b_{dL}}^{b_{dU}}\int_{\delta}^{b_{dU}}z(s)\,ds\,d\delta$ and $b_{dUL}=b_{dU}-b_{dL}$.

    Lemma 6. [41] For any positive definite symmetric constant matrix $Q$ and scalar $\tau>0$ such that the following integrals are well defined,

    $\displaystyle\int_{-\tau}^{0}\int_{t+\delta}^{t}z^T(s)Qz(s)\,ds\,d\delta\ge\frac{2}{\tau^2}\left(\int_{-\tau}^{0}\int_{t+\delta}^{t}z(s)\,ds\,d\delta\right)^TQ\left(\int_{-\tau}^{0}\int_{t+\delta}^{t}z(s)\,ds\,d\delta\right).$
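    As a quick numerical sanity check of Lemma 6 (a sketch added here, not part of the original paper), the scalar case $Q=1$, $t=0$ can be verified by simple quadrature; the function $z(s)=\sin(s)$ and the horizon $\tau=1.5$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the double-integral Jensen-type inequality of Lemma 6
# in the scalar case Q = 1, at t = 0:
#   int_{-tau}^{0} int_{delta}^{0} z(s)^2 ds d(delta)
#       >= (2 / tau^2) * ( int_{-tau}^{0} int_{delta}^{0} z(s) ds d(delta) )^2
# z and tau are arbitrary illustrative choices (not from the paper).

def double_integral(f, tau, n=2000):
    """Approximate int_{-tau}^{0} int_{delta}^{0} f(s) ds d(delta) by Riemann sums."""
    deltas = np.linspace(-tau, 0, n, endpoint=False)
    d_delta = tau / n
    total = 0.0
    for d in deltas:
        s = np.linspace(d, 0, n, endpoint=False)
        total += np.sum(f(s)) * (-d / n) * d_delta
    return total

tau = 1.5
z = np.sin

lhs = double_integral(lambda s: z(s) ** 2, tau)
inner = double_integral(z, tau)
rhs = (2.0 / tau ** 2) * inner ** 2
print(f"LHS = {lhs:.6f}, RHS = {rhs:.6f}, inequality holds: {lhs >= rhs}")
```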

    Let $h_1$, $h_2$ and $\alpha$ be constants, and set

    $h_{21}=h_2-h_1,\qquad h_{t1}=h(t)-h_1,\qquad h_{2t}=h_2-h(t),$

    $N_1=\dfrac{1-e^{-\alpha h_1}}{\alpha},\quad N_2=\dfrac{1-e^{-\alpha h_2}}{\alpha},\quad N_3=\dfrac{1-(1+\alpha h_1)e^{-\alpha h_1}}{\alpha^2},\quad N_4=\dfrac{(1+\alpha h_1)e^{-\alpha h_1}-(1+\alpha h_2)e^{-\alpha h_2}}{\alpha^2},\quad N_5=\dfrac{1-(1+\alpha h_2)e^{-\alpha h_2}}{\alpha^2},$

      $N_6=\dfrac{-3+2\alpha h_2+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha^3},\quad N_7=\dfrac{-3+2\alpha h_1+4e^{-\alpha h_1}-e^{-2\alpha h_1}}{4\alpha^3},\quad N_8=\dfrac{3-2(2+\alpha h_1)e^{-\alpha h_1}+e^{-2\alpha h_1}}{4\alpha^3},$

      $N_9=\dfrac{4(\alpha h_{21}-1)e^{-\alpha h_1}-(2\alpha h_{21}-1)e^{-2\alpha h_1}+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha^3},\quad N_{10}=\dfrac{4e^{-\alpha h_1}-e^{-2\alpha h_1}-4e^{-\alpha h_2}+(1-2\alpha h_{21})e^{-2\alpha h_2}}{4\alpha^3},$

    $I=M^{\frac12}M^{-\frac12}=M^{-\frac12}M^{\frac12},\quad \bar P_i=M^{-\frac12}P_iM^{-\frac12},\ i=1,2,3,\dots,6,\quad \bar Q_j=M^{-\frac12}Q_jM^{-\frac12},\ j=1,2,$

    $\bar R_k=M^{-\frac12}R_kM^{-\frac12},\ k=1,2,3,\quad \bar S=M^{-\frac12}SM^{-\frac12},\quad \bar T_l=M^{-\frac12}T_lM^{-\frac12},\ l=1,2,3,4,$

      $\mathcal{M}=\min_{1\le i\le 6}\lambda_{\min}\{\bar P_i\},$

    $\mathcal{N}=\lambda_{\max}\{\bar P_1\}+2\lambda_{\max}\{\bar P_2\}+\lambda_{\max}\{\bar P_3\}+2\lambda_{\max}\{\bar P_4\}+2\lambda_{\max}\{\bar P_5\}+\lambda_{\max}\{\bar P_6\}+N_1\lambda_{\max}\{\bar Q_1\}+N_2\lambda_{\max}\{\bar Q_2\}+h_1N_3\lambda_{\max}\{\bar R_1\}+h_{21}N_4\lambda_{\max}\{\bar R_2\}+h_2N_5\lambda_{\max}\{\bar R_3\}+N_6\lambda_{\max}\{\bar S\}+2\lambda_{\max}\{L_1\}+2\lambda_{\max}\{L_2\}+2\lambda_{\max}\{G_1\}+2\lambda_{\max}\{G_2\}+N_7\lambda_{\max}\{\bar T_1\}+N_8\lambda_{\max}\{\bar T_2\}+N_9\lambda_{\max}\{\bar T_3\}+N_{10}\lambda_{\max}\{\bar T_4\},$

    $L_1=\sum_{i=1}^{n}\lambda_{1i},\qquad L_2=\sum_{i=1}^{n}\lambda_{2i},\qquad G_1=\sum_{i=1}^{n}\gamma_{1i},\qquad G_2=\sum_{i=1}^{n}\gamma_{2i}.$

    The notations for some matrices are defined as follows:

    $f(t)=f(Wx(t))$ and $g_h(t)=g(Wx(t-h(t))),$

    $W_1(t)=\frac{1}{h_1}\int_{t-h_1}^{t}x(s)\,ds,\qquad W_2(t)=\frac{1}{h_{t1}}\int_{t-h(t)}^{t-h_1}x(s)\,ds,\qquad W_3(t)=\frac{1}{h_{2t}}\int_{t-h_2}^{t-h(t)}x(s)\,ds,$

    $W_4(t)=\frac{1}{h_2}\int_{t-h_2}^{t}x(s)\,ds,\qquad W_5(t)=\frac{1}{h_2}\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds,\qquad W_6(t)=\frac{1}{h_1^2}\int_{t-h_1}^{t}\int_{\tau}^{t}x(s)\,ds\,d\tau,$

    $W_7(t)=\frac{1}{h_{t1}^2}\int_{t-h(t)}^{t-h_1}\int_{\tau}^{t-h_1}x(s)\,ds\,d\tau,\qquad W_8(t)=\frac{1}{h_{2t}^2}\int_{t-h_2}^{t-h(t)}\int_{\tau}^{t-h(t)}x(s)\,ds\,d\tau,$

    $\varpi_1(t)=\big[x^T(t)\ \ x^T(t-h_1)\ \ x^T(t-h(t))\ \ x^T(t-h_2)\ \ f^T(t)\ \ g_h^T(t)\big]^T,$

    $\varpi_2(t)=\big[W_1^T(t)\ \ W_2^T(t)\ \ W_3^T(t)\ \ W_4^T(t)\ \ W_5^T(t)\ \ \dot x^T(t)\ \ W_6^T(t)\ \ W_7^T(t)\ \ W_8^T(t)\big]^T,$

    $\varpi(t)=\big[\varpi_1^T(t)\ \ \varpi_2^T(t)\big]^T,$

    $D_1=\mathrm{diag}\{k_{11}^+,k_{21}^+,\dots,k_{n1}^+\}$, $D_2=\mathrm{diag}\{k_{12}^+,k_{22}^+,\dots,k_{n2}^+\}$ and $D=\max\{D_1,D_2\},$

    $E_1=\mathrm{diag}\{k_{11}^-,k_{21}^-,\dots,k_{n1}^-\}$, $E_2=\mathrm{diag}\{k_{12}^-,k_{22}^-,\dots,k_{n2}^-\}$ and $E=\max\{E_1,E_2\},$

    $\zeta_1(t)=\big[x^T(t)\ \ x^T(t-h_1)\ \ W_1^T(t)\big],\qquad \zeta_2(t)=\big[x^T(t-h_1)\ \ x^T(t-h(t))\ \ W_2^T(t)\big],$

    $\zeta_3(t)=\big[x^T(t-h(t))\ \ x^T(t-h_2)\ \ W_3^T(t)\big],\qquad \zeta_4(t)=\big[x^T(t)\ \ x^T(t-h_2)\ \ W_4^T(t)\big],$

    $G_1=x(t)-W_1(t),\qquad G_2=x(t)+2W_1(t)-6W_6(t),$

    $G_3=W_1(t)-x(t-h_1),\qquad G_4=x(t-h_1)-4W_1(t)+6W_6(t),$

    $G_5=x(t-h_1)-W_2(t),\qquad G_6=x(t-h_1)+2W_2(t)-6W_7(t),$

    $G_7=x(t-h(t))-W_3(t),\qquad G_8=x(t-h(t))+2W_3(t)-6W_8(t),$

    $G_9=W_2(t)-x(t-h(t)),\qquad G_{10}=x(t-h(t))-4W_2(t)+6W_7(t),$

    $G_{11}=W_3(t)-x(t-h_2),\qquad G_{12}=x(t-h_2)-4W_3(t)+6W_8(t).$

    Let us consider the following LKF for the stability criterion of network (2.1):

    $V(t,x_t)=\sum_{i=1}^{10}V_i(t,x_t),$ (3.1)

    where

    $V_1(t,x_t)=x^T(t)P_1x(t)+2x^T(t)P_2\int_{t-h_2}^{t}x(s)\,ds+\Big(\int_{t-h_2}^{t}x(s)\,ds\Big)^TP_3\int_{t-h_2}^{t}x(s)\,ds+2x^T(t)P_4\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds+2\Big(\int_{t-h_2}^{t}x(s)\,ds\Big)^TP_5\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds+\Big(\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds\Big)^TP_6\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\,ds,$
    $V_2(t,x_t)=\int_{t-h_1}^{t}e^{\alpha(s-t)}x^T(s)Q_1x(s)\,ds,\qquad V_3(t,x_t)=\int_{t-h_2}^{t}e^{\alpha(s-t)}x^T(s)Q_2x(s)\,ds,$
    $V_4(t,x_t)=h_1\int_{-h_1}^{0}\int_{t+s}^{t}e^{\alpha(s-t)}\dot x^T(\delta)R_1\dot x(\delta)\,d\delta\,ds,\qquad V_5(t,x_t)=h_{21}\int_{-h_2}^{-h_1}\int_{t+s}^{t}e^{\alpha(s-t)}\dot x^T(\delta)R_2\dot x(\delta)\,d\delta\,ds,$
    $V_6(t,x_t)=h_2\int_{-h_2}^{0}\int_{t+s}^{t}e^{\alpha(s-t)}\dot x^T(\delta)R_3\dot x(\delta)\,d\delta\,ds,\qquad V_7(t,x_t)=\int_{-h_2}^{0}\int_{\tau}^{0}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot x^T(\delta)S\dot x(\delta)\,d\delta\,ds\,d\tau,$
    $V_8(t,x_t)=2e^{-\alpha t}\sum_{i=1}^{n}\int_{0}^{W_ix(t)}\big[\lambda_{1i}\big(\sigma_i^{+}s-f_i(s)\big)+\lambda_{2i}\big(f_i(s)-\sigma_i^{-}s\big)\big]\,ds,$
    $V_9(t,x_t)=2e^{-\alpha t}\sum_{i=1}^{n}\int_{0}^{W_ix(t)}\big[\gamma_{1i}\big(\eta_i^{+}s-g_i(s)\big)+\gamma_{2i}\big(g_i(s)-\eta_i^{-}s\big)\big]\,ds,$
    $V_{10}(t,x_t)=\int_{-h_1}^{0}\int_{\tau}^{0}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot x^T(\delta)T_1\dot x(\delta)\,d\delta\,ds\,d\tau+\int_{-h_1}^{0}\int_{-h_1}^{\tau}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot x^T(\delta)T_2\dot x(\delta)\,d\delta\,ds\,d\tau+\int_{-h_2}^{-h_1}\int_{\tau}^{-h_1}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot x^T(\delta)T_3\dot x(\delta)\,d\delta\,ds\,d\tau+\int_{-h_2}^{-h_1}\int_{-h_2}^{\tau}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot x^T(\delta)T_4\dot x(\delta)\,d\delta\,ds\,d\tau.$

    Next, we will show that the LKF (3.1) is positive definite as follows:

    Proposition 7. Consider $\alpha>0$. The LKF (3.1) is positive definite if there exist matrices $Q_i>0$ $(i=1,2)$, $R_j>0$ $(j=1,2,3)$, $T_k>0$ $(k=1,2,3,4)$, $S>0$ and any matrices $P_1=P_1^T$, $P_3=P_3^T$, $P_6=P_6^T$, $P_2$, $P_4$, $P_5$ such that the following LMI holds:

    $H=\begin{bmatrix}H_{11}&H_{12}&H_{13}\\ *&H_{22}&P_5\\ *&*&H_{33}\end{bmatrix}>0,$ (3.2)

    where

    $H_{11}=P_1+h_2e^{-2\alpha h_2}R_3+0.5h_2e^{-2\alpha h_2}S,\qquad H_{12}=P_2-e^{-2\alpha h_2}R_3,\qquad H_{13}=P_4-h_2^{-1}e^{-2\alpha h_2}S,$
    $H_{22}=P_3+h_2^{-1}e^{-2\alpha h_2}(R_3+Q_2),\qquad H_{33}=P_6+h_2^{-3}e^{-2\alpha h_2}(S+S^T).$

    Proof. We let $z_1(t)=h_2W_4(t)$ and $z_2(t)=h_2W_5(t)$; then

    V1(t,xt)=xT(t)P1x(t)+2xT(t)P2z1(t)+zT1(t)P3z1(t)+2xT(t)P4z2(t)+2zT1(t)P5z2(t)+zT2(t)P6z2(t),V3(t,xt)e2αh2tth2xT(s)Q2x(s)ds=h12e2αh2zT1(t)Q2z1(t),V6(t,xt)h2e2αh20h2tt+s˙xT(δ)R3˙x(δ)dδdsh2e2αh20h2s1(tt+s˙x(δ)dδ)TR3(tt+s˙x(δ)dδ)dseαh20h2[x(t)x(t+s)]TR3[x(t)x(t+s)]ds=[x(t)z1(t)]T[h2eαh2R3eαh2R3h12eαh2R3][x(t)z1(t)],V7(t,xt)eαh20h20τtt+s˙xT(δ)S˙x(δ)dδdsdτeαh20h20τs1(tt+s˙x(δ)dδ)TS(tt+s˙x(δ)dδ)dsdτh12eαh20h20τ[x(t)x(t+s)]TS[x(t)x(t+s)]dsdτ=[x(t)z2(t)]T[0.5h2e2αh2Sh12e2αh2Sh32e2αh2(S+ST)][x(t)z2(t)].

    Combining this with $V_2(t,x_t)$, $V_4(t,x_t)$, $V_5(t,x_t)$ and $V_8(t,x_t)$-$V_{10}(t,x_t)$, it follows that if LMI (3.2) holds, then the LKF (3.1) is positive definite.

    Remark 8. It is worth noting that in most previous papers [1,2,3,4,5,6,7,15,20], the Lyapunov matrices $P_1$, $P_3$ and $P_6$ must be positive definite. In our work, this restriction is removed by constructing the more elaborate Lyapunov terms $V_1(t,x_t)$, $V_3(t,x_t)$, $V_6(t,x_t)$ and $V_7(t,x_t)$, as shown in the proof of Proposition 7; therefore, $P_1$, $P_3$ and $P_6$ are only required to be real symmetric matrices. Consequently, our results are less conservative and more widely applicable than the aforementioned works.

    Theorem 9. Given a positive definite matrix $M>0$, the time-delay system described by (2.1) with delay condition (2.2) is finite-time stable with respect to $(k_1,k_2,T_f,h_1,h_2,M)$ if there exist symmetric positive definite matrices $Q_i>0$ $(i=1,2)$, $R_j>0$ $(j=1,2,3)$, $T_k>0$ $(k=1,2,3,4)$, $K_l>0$ $(l=1,2,3,\dots,10)$, diagonal matrices $S>0$, $H_m>0$ $(m=1,2,3)$, and matrices $P_1=P_1^T$, $P_3=P_3^T$, $P_6=P_6^T$, $P_2$, $P_4$, $P_5$ such that the following LMIs hold:

    $H=\begin{bmatrix}H_{11}&H_{12}&H_{13}\\ *&H_{22}&P_5\\ *&*&H_{33}\end{bmatrix}>0,$ (3.3)
    $\Omega_1=\begin{bmatrix}\Omega_{1,1}&\Omega_{1,2}\\ *&\Omega_{2,2}\end{bmatrix}<0,$ (3.4)
    Ω1,1=[Π1,1Π1,2Π1,3Π1,4Π1,5Π1,6Π1,70Π2,2Π2,3000Π2,7Π2,8Π3,3Π3,4Π3,5Π3,60Π3,8Π4,40000Π5,5Π5,600Π6,600Π7,70Π8,8]<0, (3.5)
    Ω1,2=[0Ξ1,2Ξ1,3Ξ1,400000000Ξ2,60Ξ3,100Ξ3,40Ξ3,6Ξ3,7Ξ4,1Ξ4,2Ξ4,3000Ξ4,7000Ξ5,4000000Π6,4000000000000000Ξ8,60]<0, (3.6)
    Ω2,2=[Σ1,100000Σ1,70Σ2,2Σ2,3Σ2,400000Σ3,3Ξ3,4000000Σ4,40000000Σ5,50000000Σ6,60000000Σ7,7]<0, (3.7)
    $\Omega_2=\mathrm{diag}\{\chi_i\}<0,$          (3.8)

    where $i=1,2,3,\dots,12$, $b_1=\frac{1}{6}$, $b_2=\frac{1}{h_{t1}}$, $b_3=\frac{1}{h_{2t}}$,

    $\chi_1=-2e^{-2\alpha h_1}T_1,\ \ \chi_2=-4e^{-2\alpha h_1}T_1,\ \ \chi_3=-2e^{-2\alpha h_1}T_2,\ \ \chi_4=-4e^{-2\alpha h_1}T_2,\ \ \chi_5=\chi_7=-2e^{-2\alpha h_2}T_3,\ \ \chi_6=\chi_8=-4e^{-2\alpha h_2}T_3,\ \ \chi_9=\chi_{11}=-2e^{-2\alpha h_2}T_4,\ \ \chi_{10}=\chi_{12}=-4e^{-2\alpha h_2}T_4,$

    and

    $\mathcal{N}k_1\le\mathcal{M}k_2e^{-\alpha T_f},$ (3.9)
    H11=P1+h2e2αh2R3+0.5h2e2αh2S,H12=P2e2αh2R3,  H13=P4h12e2αh2S,H22=P3+h12e2αh2(R3+Q2),  H33=P6+h32e2αh2(S+ST),Π1,1=P1AATP1+2P2+2h2P4+Q1+Q222eαh1R1b122eαh2R3b12e2αh2S2QA2WTET1HT1D1W2WTETH3DWαP1+4K1e2αh1+8K2e2αh1,Π1,2=10eαh1R1b1,  Π1,3=WTETHT3DW+WTDTHT3EW,Π1,4=P210eαh2R3b1,Π1,5=P1B+QB+WTDT1HT1+WTET1H1+WTDTHT3+WTETH3,Π1,6=P1C+QCWTDTHT3WTETH3,  Π1,7=32eαh1R1b1,Π2,2=eαh1Q116eαh1R1b122eαh2R2b14K3e2αh1+8K4e2αh19h2e2αh2T3b2+4K5e2αh2+8K6e2αh2,Π2,3=10eαh2R2b1+3h2te2αh2T3b2,  Π2,7=26eαh1R1b1,Π2,8=32eαh2R2b124h2te2αh2T3b2,Π3,3=16eαh2R2b122eαh2R2b12WTET2H2D2W2WTETH3DW9h2te2αh2T3b2+4K7e2αh2+8K8e2αh29ht1e2αh2T4b34K9e2αh2+8K10e2αh2,Π3,4=10eαh2R2b1+3ht1e2αh2T4b3,  Π3,5=WTDTHT3WTETH3,Π3,6=WTDT2HT2+WTET2H2+WTDTHT3+WTETH3,Π3,8=26eαh2R2b1+36h2te2αh2T3b3,Π4,4=eαh2Q216eαh2R2b116eαh2R3b19ht1e2αh2T4b34K11e2αh2+8K12e2αh2,Π5,5=2H12H3,  Π6,6=2H22H3,Π7,7=58eαh1R1b14K1e2αh1+16K2e2αh1+4K3e2αh132K4e2αh1,Π8,8=58eαh2R2b1192h2te2αh2T3b24K5e2αh2+16K6e2αh2+4K9e2αh232K10e2αh2,Ξ1,2=h2P3h2P4+h22PT5+32eαh2R3b1+e2αh2Sαh2P2,Ξ1,3=h2P5+h22P6αh2P4,  Ξ1,4=WTDT1L1WWTET1L2WQATQT,Ξ2,6=60h2te2αh2T3b3,  Ξ3,1=32eαh2R2b124e2αh2T4,Ξ3,4=WTDT2G1WWTET2G2W,  Ξ3,6=60h2te2αh2T3b2,  Ξ3,7=60ht1e2αh2T4b3,Ξ4,1=26e2αh2R2b1+36ht1e2αh2T4b3,  Ξ4,2=h2P3+26e2αh2R3b1,Ξ4,3=h2P5,  Ξ4,7=60ht1e2αh2T4b3,  Ξ5,4=L1W+L2W+BTQT,Ξ6,4=G1W+G2W+CTQT,  Ξ8,6=360h2te2αh2T3b2,Σ1,1=58e2αh2R2b14K7e2αh2+16K8e2αh2192h1te2αh2T4b3+4K11e2αh232K12e2αh2,Σ1,7=360h1e2αh2T4b3,  Σ2,2=h22P558eαh2R3b12e2αh2Sαh22P3,Σ2,3=h22P6αh22P5,  Σ2,4=h2P2,  Σ3,3=αh22P6,  Σ3,4=h2P4,Σ4,4=h21R1+h221R2+h22R3+3h22Sb12Q+3h21(T1+T2)b1+3h221(T3+T4)b1,Σ5,5=48K2e2αh1+48K4e2αh1,Σ6,6=720h2te2αh2T3b248K6e2αh2+48K10e2αh2,Σ7,7=48K8e2αh2720ht1e2αh2T4b3+48K12e2αh2.

    Proof. Let us choose the LKF defined as in (3.1). By Proposition 7, it is easy to check that

    $\mathcal{M}\,x^T(t)Mx(t)\le V(t,x_t),\ \ t\ge0,\qquad\text{and}\qquad V(0,x_0)\le\mathcal{N}\,\|\phi\|^2.$

    Taking the derivative of Vi(t,xt),i=1,2,3,...,10 along the solution of the network (2.1), we get

    ˙V1(t,xt)=2xT(t)AP1x(t)+2xT(t)P1Bf(t)+2xT(t)P1Cgh(t)+2xT(t)P2[x(t)x(th2)]+2h2WT4(t)P2˙x(t)+2h2[x(t)x(th2)]TP3W4(t)+2h2xT(t)P4[x(t)W4(t)] (3.10)
    +2h2WT5(t)P4˙x(t)+2h22WT4(t)P5[x(t)W4(t)]+2h2[x(t)x(th2)]TP5W5(t)+2h22[x(t)W4(t)]TP6W5(t),˙V2(t,xt)=xT(t)Q1x(t)eαh1xT(th1)Q1x(th1)αV2(t,xt),˙V3(t,xt)=xT(t)Q2x(t)eαh2xT(th2)Q2x(th2)αV3(t,xt),˙V4(t,xt)h21˙xT(t)R1˙x(t)h1eαh1tth1˙xT(s)R1˙x(s)dsαV4(t,xt),˙V5(t,xt)h221˙xT(t)R2˙x(t)h21eαh2th1th2˙xT(s)R2˙x(s)dsαV5(t,xt),˙V6(t,xt)h22˙xT(t)R3˙x(t)h2eαh2tth2˙xT(s)R3˙x(s)dsαV6(t,xt),˙V7(t,xt)h22˙xT(t)S˙x(t)e2αh20h2tt+τ˙xT(s)S˙x(s)dsdταV7(t,xt), (3.11)
    ˙V8(t,xt)2[L1(D1WxT(t)f(WxT(t)))+L2(f(WxT(t)))E1WxT(t)]W˙x(t)αV8(t,xt),˙V9(t,xt)2[G1(D2WxT(th(t))g(WxT(th(t))))+2G2(g(WxT(th(t))))E2WxT(th(t))]W˙x(t)αV9(t,xt),˙V10(t,xt)=h212˙xT(t)[T1+T2]˙x(t)+h2212˙xT(t)[T3+T4]˙x(t)e2αh1tth1tτ˙xT(s)T1˙x(s)dsdτe2αh1tth1τth1˙xT(s)T2˙x(s)dsdτe2αh2th1th2th1τ˙xT(s)T3˙x(s)dsdτe2αh2th1th2τth2˙xT(s)T4˙x(s)dsdταV10(t,xt).

    Define

    $\chi_i=\begin{bmatrix}-22R_i&10R_i&32R_i\\ *&-16R_i&26R_i\\ *&*&-58R_i\end{bmatrix},\qquad i=1,2,3,4.$

    Applying Proposition 2, we obtain

    $-h_1e^{-\alpha h_1}\displaystyle\int_{t-h_1}^{t}\dot x^T(s)R_1\dot x(s)\,ds\le\frac{e^{-\alpha h_1}}{6}\zeta_1^T(t)\chi_1\zeta_1(t),$ (3.12)
    $-h_{21}e^{-\alpha h_2}\displaystyle\int_{t-h_2}^{t-h_1}\dot x^T(s)R_2\dot x(s)\,ds\le\frac{e^{-\alpha h_2}}{6}\zeta_2^T(t)\chi_2\zeta_2(t)+\frac{e^{-\alpha h_2}}{6}\zeta_3^T(t)\chi_3\zeta_3(t),$ (3.13)
    $-h_2e^{-\alpha h_2}\displaystyle\int_{t-h_2}^{t}\dot x^T(s)R_3\dot x(s)\,ds\le\frac{e^{-\alpha h_2}}{6}\zeta_4^T(t)\chi_4\zeta_4(t).$ (3.14)

    Applying Lemma 6, this leads to

    eαh20h2tt+τ˙xT(s)S˙x(s)dsdτ2h22e2αh2[x(t)W4(t)]TS[x(t)W4(t)].

    From Corollary 4, we have

    eαh1tth1tτ˙xT(s)T1˙x(s)dsdτ2eαh1ϖT(t)[K1T11KT1+2K2T11KT2+2K1G1+4K2G2]ϖ(t),eαh1τth1tth1˙xT(s)T2˙x(s)dsdτ2eαh1ϖT(t)[K3T12KT3+2K4T12KT4+2K3G3+4K4G4]ϖ(t),eαh2th1th2th1τ˙xT(s)T3˙x(s)dsdτh2teαh2th1th(t)˙xT(s)T3˙x(s)ds+2eαh2ϖT(t)[K5T13KT5+2K6T13KT6+2K5G5+4K6G6]ϖ(t)+2eαh2ϖT(t)[K7T13KT7+2K8T13KT8+2K7G7+4K8G8]ϖ(t),eαh2th1th2τth2˙xT(s)T4˙x(s)dsdτht1eαh2th(t)th2˙xT(s)T4˙x(s)ds+2eαh2ϖT(t)[K9T14KT9+2K10T14KT10+2K9G9+4K10G10]ϖ(t)+2eαh2ϖT(t)[K11T14KT11+2K12T14×KT12+2K11G11+4K12G12]ϖ(t). (3.15)

    By Lemma 5, we obtain

    h2teαh2th1th(t)˙xT(s)T3˙x(s)dsht1eαh2th1th(t)˙xT(s)T4˙x(s)dsh2tht1eαh2([(x(th1)x(th(t))]TT3[x(th1)x(th(t))]+3[x(th1)+x(th(t))2W2(t)]TT3[x(th1)+x(th(t))2W2(t)]+5[x(th1)x(th(t))+6W2(t)12W7(t)]TT3×[x(th1)x(th(t))+6W2(t)12W7(t)])ht1h2teαh2([x(th(t))x(th2))]TT4[x(th(t))x(th2)]+3[x(th(t))+x(th2)2W3(t)]TT4[x(th(t))+x(th2)2W3(t)]+5[x(th(t))x(th2)+6W3(t)12W8(t)]TT4×[x(th(t))x(th2)+6W3(t)12W8(t)]). (3.16)

    Taking into account the activation-function conditions (2.5) and (2.6), for any diagonal matrices $H_1,H_2,H_3>0$ it follows that

    $2[f(t)-E_1Wx(t)]^TH_1[D_1Wx(t)-f(t)]\ge0,$
    $2[g_h(t)-E_2Wx(t-h(t))]^TH_2[D_2Wx(t-h(t))-g_h(t)]\ge0,$
    $2[f(t)-g_h(t)-E(Wx(t)-Wx(t-h(t)))]^TH_3[D(Wx(t)-Wx(t-h(t)))-f(t)+g_h(t)]\ge0.$ (3.17)

    Multiplying (2.1) by $(2Qx(t)+2Q\dot x(t))^T$, we have the following identity:

    $-2x^T(t)Q\dot x(t)-2x^T(t)QAx(t)+2x^T(t)QBf(t)+2x^T(t)QCg_h(t)-2\dot x^T(t)Q\dot x(t)-2\dot x^T(t)QAx(t)+2\dot x^T(t)QBf(t)+2\dot x^T(t)QCg_h(t)=0.$ (3.18)

    From (3.10)-(3.18), it can be obtained that

    $\dot V(t,x_t)+\alpha V(t,x_t)\le\varpi^T(t)[\Omega_1+\Omega_2]\varpi(t),$

    where $\Omega_1$ and $\Omega_2$ are given in Eqs (3.4) and (3.8). Since $\Omega_1<0$ and $\Omega_2<0$, we have $\dot V(t,x_t)+\alpha V(t,x_t)\le0$, i.e.,

    $\dot V(t,x_t)\le-\alpha V(t,x_t),\qquad t\ge0.$ (3.19)

    Integrating both sides of (3.19) from 0 to t with t[0,Tf], we obtain

    $V(t,x_t)\le V(0,x_0)e^{-\alpha t},\qquad t\in[0,T_f],$

    with

    V1(0,x0)=xT(0)P1x(0)+2h2xT(0)P2WT4(0)+h22W4(0)P3W4(0)+2h2xT(0)P4W5(0)+2h22WT4(0)P5W5(0)+h22P5WT5(0)P6W5(0),V2(0,x0)=0h1eαsxT(s)Q1x(s)ds,V3(0,x0)=0h2eαsxT(s)Q2x(s)ds,V4(0,x0)=h10h10seαs˙xT(δ)R1˙x(δ)dδds,V5(0,x0)=h21h1h20seαs˙xT(δ)R2˙x(δ)dδds,V6(0,x0)=h20h20seαs˙xT(δ)R3˙x(δ)dδds,V7(0,x0)=0h20τ0seα(δ+s)˙xT(δ)S˙x(δ)dδdsdτ,V8(0,x0)=2ni=1Wix0[λ1i(σ+isfi(s))+λ2i(fi(s)σis)]ds,V9(0,x0)=2ni=1Wix0[γ1i(η+isgi(s))+γ2i(gi(s)ηis)]ds,V10(0,x0)=0h10τ0seα(δ+s)˙xT(δ)T1˙x(δ)dδdsdτ+0h1τh10seα(δ+s)˙xT(δ)T2˙x(δ)dδdsdτ+h1h2h1τ0seα(δ+s)˙xT(δ)T3˙x(δ)dδdsdτ+h1h2τh2s0eα(δ+s)˙xT(δ)T4˙x(δ)dδdsdτ.

    Let $I=M^{\frac12}M^{-\frac12}=M^{-\frac12}M^{\frac12}$, $\bar P_i=M^{-\frac12}P_iM^{-\frac12}$, $i=1,2,3,\dots,6$,

    $\bar Q_j=M^{-\frac12}Q_jM^{-\frac12}$, $j=1,2$, $\bar R_k=M^{-\frac12}R_kM^{-\frac12}$, $k=1,2,3$, $\bar T_l=M^{-\frac12}T_lM^{-\frac12}$, $l=1,2,3,4$. Therefore,

    V(0,x0)=xT(0)M12ˉP1M12x(0)+2h2xT(0)M12ˉP2M12W4(0)+h22W4(0)M12ˉP3M12W4(0)+2h2xT(0)M12ˉP4M12W5(0)+2h22WT4(0)M12ˉP5M12W5(0)+h22P5WT5(0)M12ˉP6M12W5(0)+0h1eαsxT(s)M12ˉQ1M12x(s)ds+0h2eαsxT(s)M12ˉQ2M12x(s)ds+h10h10seαs˙xT(δ)M12ˉR1M12˙x(δ)dδds+h21h1h20seαs˙xT(δ)M12ˉR2M12˙x(δ)dδds+h20h20seαs˙xT(δ)M12ˉR3M12˙x(δ)dδds+0h20τ0seα(δ+s)˙xT(δ)M12ˉSM12˙x(δ)dδdsdτ+2[L1(D1WxT(0)f(WxT(0)))+L2(f(WxT(0))E1WxT(0)]+2[G1(D2WxT(0)g(WxT(0)))+G2(g(WxT(0))E2WxT(0)]+0h10τ0seα(δ+s)˙xT(δ)M12ˉT1M12˙x(δ)dδdsdτ+0h1τh10seα(δ+s)˙xT(δ)M12ˉT2M12˙x(δ)dδdsdτ+h1h2h1τ0seα(δ+s)˙xT(δ)M12ˉT3M12˙x(δ)dδdsdτ+h1h2τh2s0eα(δ+s)˙xT(δ)M12ˉT4M12˙x(δ)dδdsdτ,k1[λmax{¯P1}+2λmax{¯P2}+λmax{¯P3}+2λmax{¯P4}+2λmax{¯P5}+λmax{¯P6}+N1λmax{¯Q1}+N2λmax{¯Q2}+h1N3λmax{¯R1}+h21N4λmax{¯R2}+h2N5λmax{¯R3}+N6λmax{ˉS}+2λmax{L1}+2λmax{L2}+2λmax{G1}+2λmax{G2}+N7λmax{¯T1}+N8λmax{¯T2}+N9λmax{¯T3}+N10λmax{¯T4}].

    Since $V(t,x_t)\ge V_1(t,x_t)$, we have

    V(t,xt)xT(t)ˉP1Mx(t)+2h2xT(t)ˉP2MW4(t)+h22WT4(t)ˉP3MW4(t)+2h2xT(t)ˉP4MW5(t)+2h22WT4(t)ˉP5MWT5(t)+h22WT5(t)ˉP6MW5(t),λmin(¯Pi)xT(t)Mx(t),  i=1,2,3,4,5,6.

    For any t[0,Tf], it follows that,

    xT(t)Mx(t)k1eαTfλmin(ˉPi)[λmax{¯P1}+2λmax{¯P2}+λmax{¯P3}+2λmax{¯P4}+2λmax{¯P5}+λmax{¯P6}+N1λmax{¯Q1}+N2λmax{¯Q2}+h1N3λmax{¯R1}+h21N4λmax{¯R2}+h2N5λmax{¯R3}+N6λmax{ˉS}+2λmax{L1}+2λmax{L2}+2λmax{G1}+2λmax{G2}+N7λmax{¯T1}+N8λmax{¯T2}+N9λmax{¯T3}+N10λmax{¯T4}]<k2.

    This is guaranteed by condition (3.9). Therefore, the delayed neural network described by (2.1) with delay condition (2.2) is finite-time stable with respect to $(k_1,k_2,T_f,h_1,h_2,M)$.

    Remark 10. Condition (3.9) is not in the standard form of an LMI. To verify that this condition is equivalent to a set of LMI relations, one applies the Schur complement lemma (Lemma 3) and lets $B_i$, $i=1,2,3,\dots,21$, be positive scalars with

    $B_1=\lambda_{\min}\{\bar P_i\},\ i=1,2,3,\dots,6,$
    $B_2=\lambda_{\max}\{\bar P_1\},\ B_3=\lambda_{\max}\{\bar P_2\},\ B_4=\lambda_{\max}\{\bar P_3\},\ B_5=\lambda_{\max}\{\bar P_4\},\ B_6=\lambda_{\max}\{\bar P_5\},\ B_7=\lambda_{\max}\{\bar P_6\},\ B_8=\lambda_{\max}\{\bar Q_1\},\ B_9=\lambda_{\max}\{\bar Q_2\},$
    $B_{10}=\lambda_{\max}\{\bar R_1\},\ B_{11}=\lambda_{\max}\{\bar R_2\},\ B_{12}=\lambda_{\max}\{\bar R_3\},\ B_{13}=\lambda_{\max}\{\bar S\},\ B_{14}=\lambda_{\max}\{L_1\},\ B_{15}=\lambda_{\max}\{L_2\},\ B_{16}=\lambda_{\max}\{G_1\},\ B_{17}=\lambda_{\max}\{G_2\},$
    $B_{18}=\lambda_{\max}\{\bar T_1\},\ B_{19}=\lambda_{\max}\{\bar T_2\},\ B_{20}=\lambda_{\max}\{\bar T_3\},\ B_{21}=\lambda_{\max}\{\bar T_4\}.$

    Let us define the following condition

    $k_1\big[B_2+2B_3+B_4+2B_5+2B_6+B_7+N_1B_8+N_2B_9+h_1N_3B_{10}+h_{21}N_4B_{11}+h_2N_5B_{12}+N_6B_{13}+2B_{14}+2B_{15}+2B_{16}+2B_{17}+N_7B_{18}+N_8B_{19}+N_9B_{20}+N_{10}B_{21}\big]<k_2B_1e^{-\alpha T_f}.$

    It follows that condition (3.9) is equivalent to the relations and LMIs as follows:

    $B_1I<\bar P_1<B_2I,\ \ 0<\bar P_2<B_3I,\ \ 0<\bar P_3<B_4I,\ \ 0<\bar P_4<B_5I,\ \ 0<\bar P_5<B_6I,\ \ 0<\bar P_6<B_7I,\ \ 0<\bar Q_1<B_8I,\ \ 0<\bar Q_2<B_9I,$
    $0<\bar R_1<B_{10}I,\ \ 0<\bar R_2<B_{11}I,\ \ 0<\bar R_3<B_{12}I,\ \ 0<\bar S<B_{13}I,\ \ 0<L_1<B_{14}I,\ \ 0<L_2<B_{15}I,\ \ 0<G_1<B_{16}I,\ \ 0<G_2<B_{17}I,$
    $0<\bar T_1<B_{18}I,\ \ 0<\bar T_2<B_{19}I,\ \ 0<\bar T_3<B_{20}I,\ \ 0<\bar T_4<B_{21}I,$ (3.20)
    1=[1,11,21,32,203,3]<0, (3.21)
    1,1=[ψ1,1ψ1,2ψ1,3ψ1,4ψ1,5ψ1,6ψ1,7B200000B30000B4000B500B60B7], (3.22)
    1,2=[ψ1,8ψ1,9ψ1,10ψ1,11ψ1,12ψ1,13ψ1,14000000000000000000000], (3.23)
    1,3=[ψ1,15ψ1,16ψ1,17ψ1,18ψ1,19ψ1,20ψ1,21000000000000000000000], (3.24)
    2,2=[B8000000B900000B100000B11000B1200B130B14], (3.25)
    3,3=[B15000000B1600000B170000B18000B1900B200B21], (3.26)

    where IRn×n is an identity matrix, ψ1,1=B1k2eαTf, ψ1,2=B2k1, ψ1,3=B32k1, ψ1,4=B4k1, ψ1,5=B52k1, ψ1,6=B62k1, ψ1,7=B7k1, ψ1,8=B8k1N1, ψ1,9=B9k1N2, ψ1,10=B10k1h1N3, ψ1,11=B11k1h21N4, ψ1,12=B12k1h2N5, ψ1,13=B13k1N6, ψ1,14=B142k1, ψ1,15=B152k1, ψ1,16=B162k1, ψ1,17=B172k1, ψ1,18=B18k1N7, ψ1,19=B19k1N8, ψ1,20=B20k1N9, ψ1,21=B21k1N10.

    Corollary 11. Given a positive definite matrix $M>0$, the time-delay system described by (2.1) with delay condition (2.2) is finite-time stable with respect to $(k_1,k_2,T_f,h_1,h_2,M)$ if there exist symmetric positive definite matrices $Q_i>0$ $(i=1,2)$, $R_j>0$ $(j=1,2,3)$, $T_k>0$ $(k=1,2,3,4)$, $K_l>0$ $(l=1,2,3,\dots,10)$, diagonal matrices $S>0$, $H_m>0$ $(m=1,2,3)$, matrices $P_1=P_1^T$, $P_3=P_3^T$, $P_6=P_6^T$, $P_2$, $P_4$, $P_5$, and positive scalars $\alpha$, $B_i$, $i=1,2,3,\dots,21$, such that the LMIs and inequalities (3.3)-(3.8) and (3.20)-(3.26) hold.

    Remark 12. If the system matrices in (2.1) are chosen as $B=W_0$, $C=W_1$ and $W=W_2$, then (2.1) turns into the delayed NN proposed in [23],

    $\dot x(t)=-Ax(t)+W_0f(W_2x(t))+W_1g(W_2x(t-h(t))),$ (3.27)

    where $0\le h(t)\le h_M$ and $\dot h(t)\le h_D$; it follows that (3.27) is a special case of the delayed NN (2.1).

    Remark 13. Replacing $W_0=B$, $W_1=C$, $W_2=W$, $d_1(t)=d(t)=h(t)$ and $d_2(t)=0$, and setting the external constant input equal to zero in Eq (1) of the delayed NN studied in [16], we have

    $\dot x(t)=-Ax(t)+Bf(Wx(t))+Cg(Wx(t-h(t))),$ (3.28)

    so (3.28) is the same NN as (2.1); that is, (2.1) is a particular case of the delayed NN in [16].

    Remark 14. If we choose $B=0$, $C=I$ and $g=f$, and set the constant input equal to zero in the delayed NN (2.1), then it can be rewritten as

    $\dot x(t)=-Ax(t)+f(Wx(t-h(t))),$ (3.29)

    so (3.29) is a special case of the NN (2.1), as studied in [2,3,4,5,6,10,12,13,20].

    Remark 15. If we set $B=W_0$, $C=W_1$ and $W=I$, and the constant input equal to zero in the delayed NN (2.1), then (2.1) turns into

    $\dot x(t)=-Ax(t)+W_0f(x(t))+W_1g(x(t-h(t))),$ (3.30)

    so (3.30) is a special case of the NN (2.1), as studied in [8,11,24,28]. Similarly, if we rearrange the matrices in the delayed NN (2.1) and set $W=I$, it coincides with the delayed NNs proposed in [9,19,22].

    Remark 16. The time delay in this work is a continuous function taking values in a given interval, so lower and upper bounds of the time-varying delay exist and the delay function need not be differentiable. In several existing works, the time-delay function is required to be differentiable, as reported in [2,3,4,5,6,8,9,10,11,12,13,15,16,17,19,20,22,23,24,28].

    In this section, we provide numerical examples with their simulations to demonstrate the effectiveness of our results.

    Example 17. Consider the neural networks (2.1) with parameters as follows:

    A=diag{7.3458,6.9987,5.5949},B=diag{0,0,0},C=diag{1,1,1},W=[13.60142.96160.69387.473621.68103.21000.72902.633420.1300].

    The activation function satisfies Eq (2.3) with

    $E_1=E_2=E=\mathrm{diag}\{0,0,0\},\qquad D_1=D_2=D=\mathrm{diag}\{0.3680,\,0.1795,\,0.2876\}.$

    By applying the Matlab LMI Toolbox to solve the LMIs in (3.4)-(3.8), we obtain the upper bound $h_{\max}$ of the (possibly non-differentiable) time-varying delay of the NN (2.1); Table 1 compares the results of this paper with those proposed in [1,2,3,4,5,6,7,15,20]. The upper bounds obtained in this work are larger than the corresponding ones. Note that the symbol ‘-’ indicates upper bounds that are not provided in the cited literature or in this paper.
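    The feasibility tests in this paper are carried out with the Matlab LMI Toolbox. Purely as an illustration of the same LMI-feasibility workflow in open-source tooling (a sketch under the assumption that CVXPY with an SDP solver such as SCS is installed), the snippet below checks a much simpler Lyapunov LMI for the delay-free part of Example 17; it does not implement the full conditions (3.3)-(3.8) of Theorem 9.

```python
import numpy as np
import cvxpy as cp

# Illustrative LMI feasibility check (NOT the full conditions (3.3)-(3.8)):
# find P = P^T > 0 such that Acl^T P + P Acl < 0, where Acl = -A is the
# delay-free part of Example 17 with A = diag{7.3458, 6.9987, 5.5949}.
Acl = -np.diag([7.3458, 6.9987, 5.5949])

n = Acl.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
lyap = Acl.T @ P + P @ Acl
constraints = [P >> eps * np.eye(n),
               0.5 * (lyap + lyap.T) << -eps * np.eye(n)]  # symmetrized for the solver
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE))
```

    In practice, the decision variables $P_i$, $Q_i$, $R_j$, $S$, $T_k$ and the block matrices of Theorem 9 would be declared in the same way and passed to the solver as a single feasibility problem.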

    Table 1.  Upper bounds of the time delay $h$ for various values of $\mu$ ($h_1=0.1$ and $h_1=0.5$).
    $h_1$ | Method | $\mu=0.1$ | $\mu=0.3$ | $\mu=0.5$ | $\mu=0.9$ | unknown $\mu$
    0.1 [1] 0.8411 0.5496 0.4267 0.3227 -
    [2] 0.9282 0.5891 - 0.3399 -
    [3] 0.9985 0.6062 - 0.3905 -
    [4] 1.1243 0.6768 0.5168 0.4487 -
    [5] 1.1278 0.6860 0.5325 0.4602 -
    Thm 1 [6] 1.2080 0.6744 0.5149 0.4482 -
    Prop. 2 [6] 1.2198 0.6771 0.5218 0.4601 -
    Thm 2 [6] 1.3282 0.7547 0.6341 0.5245 -
    [15] 0.9291 0.5916 - 0.3413 0.3413
    [20] 1.1732 0.6848 - 0.4526 0.4526
    This paper - - - - 2.4989
    0.5 [2] 1.0497 0.6021 - 0.6021 -
    [7] 1.1313 0.6509 - - -
    [4] 1.1366 0.6896 0.6243 0.6186 -
    [5] 1.1423 0.7206 0.6382 0.6219 -
    Thm 1 [6] 0.2106 0.6727 0.5657 0.4360 -
    Prop. 2 [6] 1.2327 0.6807 0.5766 0.4864 -
    Thm 2 [6] 1.3417 0.7744 0.6635 0.6221 -
    [15] 1.0521 0.6053 - 0.6053 0.6053
    [20] 1.3046 0.7738 - 0.7704 0.7704
    This paper - - - - 2.4997


    For the numerical simulation of the finite-time stability of the delayed neural network (2.1), we take the time-varying delay $h(t)=0.6+0.5|\sin t|$ and the initial condition $\phi(t)=[0.8,\,0.3,\,0.8]^T$, so that $x^T(0)Mx(0)=1.37$ with $M=I$; we then choose $k_1=1.4$ and the activation function $g(x(t))=\tanh(x(t))$. The trajectories of $x_1(t)$, $x_2(t)$ and $x_3(t)$ showing the finite-time stability of this network are plotted in Figure 1, and Figure 2 shows the trajectory of $x^T(t)x(t)$ for the delayed neural network (2.1) with $k_2=1.575$.

    Figure 1.  The trajectories of x1(t),x2(t) and x3(t) of finite-time stability for delayed neural network of Example 17.
    Figure 2.  The trajectories of xT(t)x(t) of finite-time stability for delayed neural network (2.1) with k2=1.575 of Example 17.

    Example 18. Consider the neural networks (2.1) with parameters as follows:

    A=diag{7.0214,7.4367},B=diag{0,0},C=diag{1,1},W=[6.499312.02750.68675.6614],

    The activation function satisfies Eq (2.3) with

    $E_1=E_2=E=\mathrm{diag}\{0,0\},\qquad D_1=D_2=D=\mathrm{diag}\{1,1\}.$

    Table 2 compares the results obtained in [2,3,5,6,20] with those of this work. Using the Matlab LMI Toolbox, we obtain the upper bound $h_{\max}$ of the time-varying delay of the NN (2.1) for various values of $\mu$. The upper bounds obtained in this paper are larger than the corresponding ones. As before, the symbol ‘-’ indicates upper bounds that are not given in the cited works or in this study.

    Table 2.  Upper bounds of the time delay $h$ for various values of $\mu$ ($h_1=0.1$ and $h_1=0.5$).
    $h_1$ | Method | $\mu=0.3$ | $\mu=0.5$ | $\mu=0.9$ | unknown $\mu$
    0.1 [2] 0.4249 0.3014 0.2857 -
    [3] 0.4764 0.3635 0.3255 -
    [5] 0.5849 0.4433 0.3820 -
    Thm 1 [6] 0.5756 0.4312 0.3707 -
    Prop. 2 [6] 0.5783 0.4385 0.3860 -
    Thm 2 [6] 0.6444 0.5329 0.4383 -
    [20] 0.5123 0.4978 0.4625 0.4625
    This paper - - - 0.8999
    0.5 [2] 0.5147 0.4134 0.4134 -
    [3] 0.5335 0.4229 0.4228 -
    [5] 0.5992 0.4796 0.4373 -
    Thm 1 [6] 0.5760 0.4418 0.3922 -
    Prop. 2 [6] 0.5799 0.4583 0.4085 -
    Thm 2 [6] 0.6511 0.5408 0.4535 -
    [20] 0.6356 0.6356 0.6356 0.6356
    This paper - - - 0.8999


    For the numerical simulation of the finite-time stability of the delayed neural network (2.1), we take the time-varying delay $h(t)=0.6+0.5|\sin t|$ and the initial condition $\phi(t)=[0.4,\,0.5]^T$, so that $x^T(0)Mx(0)=0.41$ with $M=I$; we then choose $k_1=0.5$ and the activation function $g(x(t))=\tanh(x(t))$. The trajectories of $x_1(t)$ and $x_2(t)$ showing the finite-time stability of this network are plotted in Figure 3, and Figure 4 shows the trajectory of $x^T(t)x(t)$ for the delayed neural network (2.1) with $k_2=0.85$.

    Figure 3.  The trajectories of x1(t) and x2(t) of finite-time stability for delayed neural network of Example 18.
    Figure 4.  The trajectories of xT(t)x(t) of finite-time stability for delayed neural network (2.1) with k2=0.85 of Example 18.

    Example 19. Consider the neural networks (2.1) with parameters as follows:

    A=[1.71.701.310.70.710.6],B=[1.51.70.11.310.50.710.6],C=[0.50.70.10.30.10.50.70.50.6],W=I,

    and the activation functions $f(x(t))=g(x(t))=\tanh(x(t))$, with the time-varying delay $h(t)=0.6+0.5|\sin t|$. With the initial condition $\phi(t)=[0.4,\,0.2,\,0.4]^T$, the solution of the neural network is shown in Figure 5. We can see that the trajectory of $x^T(t)Mx(t)=\|x(t)\|^2$ diverges as $t\to\infty$, as shown in Figure 6. We further investigate the maximum value of $T_f$ for which the neural network (2.1) is finite-time stable with respect to $(0.6,k_2,T_f,0.6,1.1,I)$. For fixed $k_2=500$, by solving the LMIs in Theorem 9 and Corollary 11, we obtain the maximum value $T_f=8.395$.

    Figure 5.  The trajectories of x1(t), x2(t) and x3(t) of finite-time stability for delayed neural network of Example 19.
    Figure 6.  The trajectories of xT(t)x(t) of finite-time stability for delayed neural network (2.1) with k2=500 and Tf=8.395 of Example 19.

    In this research, finite-time stability criteria for neural networks with non-differentiable time-varying delays were proposed via a new augmented Lyapunov-Krasovskii functional (LKF). The new LKF includes triple integral terms, and its derivative was bounded by integral inequalities and a positive diagonal matrix, without free-weighting matrix variables. The improved finite-time sufficient conditions for the neural network with time-varying delay were expressed in terms of linear matrix inequalities (LMIs), and the results are less conservative than those reported in previous research.

    The first author was supported by Faculty of Science and Engineering, and Research and Academic Service Division, Kasetsart University, Chalermprakiat Sakon Nakhon province campus. The second author was financially supported by the Thailand Research Fund (TRF), the Office of the Higher Education Commission (OHEC) (grant number : MRG6280149) and Khon Kaen University.

    The authors declare that there is no conflict of interests regarding the publication of this paper.



    [1] D. R. Wu, J. M. Mendel, Uncertainty measures for interval type-2 fuzzy sets, Inf. Sci., 177 (2007), 5378-5393. doi: 10.1016/j.ins.2007.07.012
    [2] J. M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions, Englewood Cliffs, NJ, USA: Prentice-Hall, 2001, 1-547.
    [3] P. Melin, L. Astudillo, O. Castillo, et al. Optimal design of type-2 and type-1 fuzzy tracking controllers for autonomous mobile robots under perturbed torques using a new chemical optimization paradigm, Expert Syst. Appl., 40 (2013), 3185-3195. doi: 10.1016/j.eswa.2012.12.032
    [4] H. Hagras, A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots, IEEE Trans. Fuzzy Syst., 12 (2004), 524-539. doi: 10.1109/TFUZZ.2004.832538
    [5] C. W. Tao, J. S. Taur, C. W. Chang, et al. Simplified type-2 fuzzy sliding controller for wing rocket system, Fuzzy Sets Syst., 207 (2012), 111-129. doi: 10.1016/j.fss.2012.02.015
    [6] D. Bernardo, H. Hagras, E. Tsang, A genetic type-2 fuzzy logic based system for the generation of summarized linguistic predictive models for financial applications, Soft Comput., 17 (2013), 2185-2201.
    [7] Y. Chen, D. Z. Wang, S. C. Tong, Forecasting studies by designing Mamdani interval type-2 fuzzy logic systems: With combination of BP algorithms and KM algorithms, Neurocomputing, 174 (2016), 1133-1146.
    [8] A. Khosravi, S. Nahavandi, D. Creighton, et al., Interval type-2 fuzzy logic systems for load forecasting: a comparative study, IEEE Trans. Power Syst., 27 (2012), 1274-1282. doi: 10.1109/TPWRS.2011.2181981
    [9] S. Barkat, A. Tlemcani, H. Nouri, Noninteracting adaptive control of PMSM using interval type-2 fuzzy logic systems, IEEE Trans. Fuzzy Syst., 19 (2011), 925-936. doi: 10.1109/TFUZZ.2011.2152815
    [10] D. Z. Wang, Y. Chen, Study on permanent magnetic drive forecasting by designing Takagi Sugeno Kang type interval type-2 fuzzy logic systems, Trans. Institute Meas. Control, 40 (2018), 2011-2023.
    [11] Y. Chen, D. Z. Wang, Forecasting by designing Mamdani general type-2 fuzzy logic systems optimized with quantum particle swarm optimization algorithms, Trans. Institute Meas. Control, 41 (2019), 2886-2896.
    [12] P. Melin, O. Mendoza, O. Castillo, An improved method for edge detection based on interval type-2 fuzzy logic, Expert Syst. Appl., 37 (2010), 8527-8535.
    [13] C. S. Lee, M. H. Wang, H. Hagras, Type-2 fuzzy ontology and its application to personal diabetic-diet recommendation, IEEE Trans. Fuzzy Syst., 18 (2010), 316-328.
    [14] G. M. Méndez, M. D. L. A. Hernandez, Hybrid learning for interval type-2 fuzzy logic systems based on orthogonal least-squares and back-propagation methods, Inf. Sci., 179 (2009), 2146-2157.
    [15] G. M. Méndez, M. D. L. A. Hernandez, Hybrid learning mechanism for interval A2-C1 type-2 non-singleton type-2 Takagi-Sugeno-Kang fuzzy logic systems, Inf. Sci., 220 (2013), 149-169. doi: 10.1016/j.ins.2012.01.024
    [16] T. Wang, Y. Chen, S. C. Tong, Fuzzy reasoning models and algorithms on type-2 fuzzy sets, Int. J. Innovative Comput. Inf. Control, 4 (2008), 2451-2460.
    [17] J. M. Mendel, General type-2 fuzzy logic systems made simple: A tutorial, IEEE Trans. Fuzzy Sys., 22 (2014), 1162-1182.
    [18] J. M. Mendel, On KM algorithms for solving type-2 fuzzy set problems, IEEE Trans. Fuzzy Syst., 21 (2013), 426-446. doi: 10.1109/TFUZZ.2012.2227488
    [19] D. R. Wu, J. M. Mendel, Enhanced Karnik-Mendel algorithms, IEEE Trans. Fuzzy Syst., 17 (2009), 923-934. doi: 10.1109/TFUZZ.2008.924329
    [20] J. M. Mendel, F. L. Liu, Super-exponential convergence of the Karnik-Mendel algorithms for computing the centroid of an interval type-2 fuzzy set, IEEE Trans. Fuzzy Syst., 15 (2007), 309-320. doi: 10.1109/TFUZZ.2006.882463
    [21] X. W. Liu, J. M. Mendel, D. R. Wu, Study on enhanced Karnik-Mendel algorithms: Initialization explanations and computation improvements, Inf. Sci., 184 (2012), 75-91. doi: 10.1016/j.ins.2011.07.042
    [22] J. W. Li, R. John, S. Coupland, et al., On Nie-Tan operator and type-reduction of interval type-2 fuzzy sets, IEEE Trans. Fuzzy Syst., 26 (2018), 1036-1039.
    [23] Y. Chen, Study on weighted Nagar-Bardini algorithms for centroid type-reduction of interval type-2 fuzzy logic systems, J. Intell. Fuzzy Syst., 34 (2018), 2417-2428.
    [24] J. M. Mendel, R. I. John, F. L. Liu, Interval type-2 fuzzy logic systems made simple, IEEE Trans. Fuzzy Syst., 14 (2006), 808-821. doi: 10.1109/TFUZZ.2006.879986
    [25] Y. Chen, D. Z. Wang, Study on centroid type-reduction of general type-2 fuzzy logic systems with weighted Nie-Tan algorithms, Soft Comput., 22 (2018), 7659-7678.
    [26] F. L. Liu, An efficient centroid type-reduction strategy for general type-2 fuzzy logic system, Inf. Sci., 178 (2008), 2224-2236. doi: 10.1016/j.ins.2007.11.014
    [27] J. M. Mendel, X. W. Liu, Simplified interval type-2 fuzzy logic systems, IEEE Trans. Fuzzy Syst., 21 (2013), 1056-1069.
    [28] S. Greenfield, F. Chiclana, Accuracy and complexity evaluation of defuzzification strategies for the discretised interval type-2 fuzzy set, Int. J. Approximate Reasoning, 54 (2013), 1013-1033.
    [29] Y. Chen, Study on centroid type-reduction of interval type-2 fuzzy logic systems based on noniterative algorithms, Complexity, 2019 (2019), 1-12.
    [30] T. Kumbasar, Revisiting Karnik-Mendel algorithms in the framework of linear fractional programming, Int. J. Approximate Reasoning, 82 (2017), 1-21.
    [31] S. Greenfield, F. Chiclana, S. Coupland, et al., The collapsing method of defuzzification for discretised interval type-2 fuzzy sets, Inf. Sci., 179 (2009), 2055-2069. doi: 10.1016/j.ins.2008.07.011
    [32] D. R. Wu, Approaches for reducing the computational cost of interval type-2 fuzzy logic systems: overview and comparisons, IEEE Trans. Fuzzy Syst., 21 (2013), 80-99.
    [33] M. A. Khanesar, A. Jalalian, O. Kaynak, Improving the speed of center of set type-reduction in interval type-2 fuzzy systems by eliminating the need for sorting, IEEE Trans. Fuzzy Syst., 25 (2017), 1193-1206. doi: 10.1109/TFUZZ.2016.2602392
    [34] Y. Chen, D. Z. Wang, W. Ning, Forecasting by TSK general type-2 fuzzy logic systems optimized with genetic algorithms, Optimal Control Appl. Methods, 39 (2018), 393-409.
    [35] Y. Chen, D. Z. Wang, Forecasting by general type-2 fuzzy logic systems optimized with QPSO algorithms, Int. J. Control, Automation Syst., 15 (2017), 2950-2958.
    [36] F. Gaxiola, P. Melin, F. Valdez, et al. Optimization of type-2 fuzzy weights in backpropagation learning for neural networks using GAs and PSO, Appl. Soft Comput., 38 (2016), 860-871. doi: 10.1016/j.asoc.2015.10.027
    [37] Q. F. Fan, T. Wang, Y. Chen, et al., Design and application of interval type-2 TSK fuzzy logic system based on QPSO algorithm, Int. J. Fuzzy Syst., 20 (2018), 835-846. doi: 10.1007/s40815-017-0357-3
    [38] C. H. Hsu, C. F. Juang, Evolutionary robot wall-following control using type- 2 fuzzy controller with species-de-activated continuous ACO, IEEE Trans. Fuzzy Syst., 21 (2013), 100-112.
    [39] D. R. Wu, J. M. Mendel, Recommendations on designing practical interval type-2 fuzzy systems, Eng. Appl. Artif. Intell., 85 (2019), 182-193.
    [40] X. L. Liu, S. P. Wan, Combinatorial iterative algorithms for computing the centroid of an interval type-2 fuzzy set, IEEE Trans. Fuzzy Syst., 2019, DOI: 10.1109/TFUZZ.2019.2911918.
    [41] H. Z. Hu, Y. Wang, Y. L. Cai, Advantages of the enhanced opposite direction searching algorithm for computing the centroid of an interval type-2 fuzzy set, Asian J. Control, 14 (2012), 1422-1430.
    [42] J. H. Hu, P. P. Chen, Y. Yang, The fruit fly optimization algorithms for patient-centered care based on interval trapezoidal type-2 fuzzy numbers, Int. J. Fuzzy Syst., 21 (2019), 1270-1287.
    [43] M. Javanmard, H. Mishmast Nehi, A solving method for fuzzy linear programming problem with interval type-2 fuzzy numbers, Int. J. Fuzzy Syst., 21 (2019), 882-891.
    [44] C. Chen, R. John, J. Twycross, et al. A direct approach for determining the switch points in the Karnik-Mendel algorithm, IEEE Trans. Fuzzy Syst., 26 (2018), 1079-1085. doi: 10.1109/TFUZZ.2017.2699168
    [45] O. Castillo, L. Amador-Angulo, J. R. Castro, et al. A comparative study of type-1 fuzzy logic systems, interval type-2 fuzzy logic systems and generalized type-2 fuzzy logic systems in control problems, Inf. Sci., 354 (2016), 257-274.
    [46] L. Cervantes, O. Castillo, Type-2 fuzzy logic aggregation of multiple fuzzy controllers for airplane flight control, Inf. Sci., 324 (2015), 247-256.
    [47] O. Castillo, P. Melin, E. Ontiveros, et al. A high-speed interval type 2 fuzzy system approach for dynamic parameter adaptation in metaheuristics, Eng. Appl. Artificial Intelligence, 85 (2019), 666-680.
    [48] E. Ontiveros-Robles, P. Melin, O. Castillo, Comparative analysis of noise robustness of type 2 fuzzy logic controllers, Kybernetika, 54 (2018), 175-201.
    [49] E. Ontiveros-Robles, P. Melin, O. Castillo, New methodology to approximate type-reduction based on a continuous root-finding karnik mendel algorithm, Algorithms, 10 (2017), 77-96. doi: 10.3390/a10030077
    [50] Y. Chen, Study on sampling-based discrete noniterative algorithms for centroid type-reduction of interval type-2 fuzzy logic systems, Soft Comput., 24 (2020), 11819-11828.
    [51] S. C. Tong, Y. M. Li, Robust adaptive fuzzy backstepping output feedback tracking control for nonlinear system with dynamic uncertainties, Sci. China Inf. Sci., 53 (2010), 307-324. doi: 10.1007/s11432-010-0031-y
    [52] S. C. Tong, Y. M. Li, Observer-based adaptive fuzzy backstepping control of uncertain pure-feedback systems, Sci. China Inf. Sci., 57 (2014), 1-14.
    [53] M. Deveci, I. Z. Akyurt, S. Yavuz, GIS-based interval type-2 fuzzy set for public bread factory site selection, J. Enterprise Inf. Manage., 31 (2018), 820-847.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)