Research article

Improved results on mixed passive and $H_\infty$ performance for uncertain neural networks with mixed interval time-varying delays via feedback control

  • Received: 30 October 2020 Accepted: 24 December 2020 Published: 31 December 2020
  • MSC : 34D20, 34H15

  • This paper studies the mixed passive and $H_\infty$ performance for uncertain neural networks with interval discrete and distributed time-varying delays via feedback control. The interval discrete and distributed time-varying delay functions are not assumed to be differentiable. Improved criteria for exponential stability with a mixed passive and $H_\infty$ performance are obtained for the uncertain neural networks by constructing a Lyapunov-Krasovskii functional (LKF) comprising single, double, triple, and quadruple integral terms and using a feedback controller. Furthermore, integral inequalities and a convex combination technique are applied to achieve less conservative results for a special case of neural networks. By using the Matlab LMI toolbox, the new exponential stability criteria with a mixed passive and $H_\infty$ performance are expressed in terms of linear matrix inequalities (LMIs) that cover both $H_\infty$ and passive performance by setting parameters in the general performance index. Numerical examples are shown to demonstrate the benefits and effectiveness of the derived theoretical results. The method given in this paper is less conservative and more general than existing ones.

    Citation: Sunisa Luemsai, Thongchai Botmart, Wajaree Weera, Suphachai Charoensin. Improved results on mixed passive and $H_\infty$ performance for uncertain neural networks with mixed interval time-varying delays via feedback control. AIMS Mathematics, 2021, 6(3): 2653-2679. doi: 10.3934/math.2021161




    During the past few decades, many researchers have studied neural networks because of their applications in many fields such as parallel computation, fault diagnosis, image processing, optimization problems, industrial automation, and so on [1,2,3,4,5]. To realize these applications, we first need to analyze the stability of the equilibrium point of the neural networks. Further, an important factor affecting system analysis is time delay. It is well known that time delay is a normal phenomenon in neural networks, since neural networks consist of a large number of neurons that connect and communicate with each other through axons of diverse sizes and lengths. Moreover, the existence of time delay causes poor control performance, divergence, oscillation, and instability in the system [6]. Stability analysis of neural networks with constant, discrete, and distributed time-varying delays has received considerable attention [7,8,9]. For example, in [7], a delay-dependent criterion for exponential stability analysis of neural networks with time-varying delays satisfying $0\le\eta(t)\le\eta$, $\dot{\eta}(t)\le\mu$ is obtained. In [8], the problem of dissipativity analysis for neural networks with time-varying delays is investigated. However, in practice, time delay can occur in an irregular fashion; for instance, the time-varying delays are sometimes not differentiable. This inspires us to study neural networks without any restriction on the derivative of the time-varying delays.

    On the other hand, owing to external perturbations and uncertain or slowly varying parameters, an accurate mathematical model is not easy to obtain, and data tend to be uncertain in many applications [10,11,12]. Therefore, it is important to guarantee that the model is stable with respect to the uncertainties. Since uncertainty in neural networks cannot be avoided, the problem of robust stability analysis for uncertain neural networks has been widely studied. For example, Subramanian et al. [13] investigated the robust stabilization of uncertain neural networks with two additive time-varying delays based on a Wirtinger-based double integral inequality. In [14], Zeng et al. studied the robust passivity analysis of uncertain neural networks with discrete and distributed delays by constructing an augmented Lyapunov functional and combining a new integral inequality with the reciprocally convex approach.

    It is well known that passivity is a special case of the general theory of dissipativeness, and it plays an influential part in the design of linear and nonlinear systems. It is widely applied in many areas such as sliding mode control [15], fuzzy control [16], network control [17], and signal processing [18]. The main property of passivity is that it can keep the system internally stable. Recently, the passivity problem has been studied in [14,19,20,21,22]. In addition, $H_\infty$ theory is very important because the $H_\infty$ control design poses the control problem as a mathematical optimization problem whose solution is the controller. $H_\infty$ approaches are used in control theory to synthesize controllers achieving stabilization with guaranteed performance [23,24]. The problem of mixed $H_\infty$ and passivity analysis was first studied in [25,26] and has since received a lot of attention from many researchers. For example, the mixed passive and $H_\infty$ synchronization problems of complex dynamical networks have been analyzed in [27,28], and the combined $H_\infty$ and passivity state estimation of memristive neural networks was studied in [29]. Nevertheless, the mixed passive and $H_\infty$ analysis problem for uncertain neural networks with interval discrete and distributed time-varying delays has rarely been considered, which is our motivation.

    Inspired by the above discussions, the problem of mixed passive and $H_\infty$ performance for uncertain neural networks with interval discrete and distributed time-varying delays via feedback control is studied. The main contributions of this paper are threefold.

    In this work, the system contains interval discrete and distributed time-varying delays that are not required to be differentiable functions, which means that fast interval discrete and distributed time-varying delays are allowed. The lower bound of the delays is not restricted to be 0, the activation functions may differ from one another, and the output is general.

    By using Lyapunov-Krasovskii stability theory, new results on the exponential stability with a mixed passive and $H_\infty$ performance for the uncertain neural networks are obtained. Thanks to the weighting parameter, the results are more general in that both the $H_\infty$ performance and the passive performance of the uncertain neural networks are included.

    Different from the methods in [30,31,32], a Lyapunov-Krasovskii functional comprising single, double, triple, and quadruple integral terms is employed together with integral inequalities, a convex combination idea, and a zero equation. The method used in this paper yields less conservative results when compared with existing results [30,31,32].

    This paper is organized in five sections as follows. In Section 2, the network model and preliminaries are provided. Section 3 establishes the exponential stability analysis with a mixed passive and $H_\infty$ performance of the uncertain neural network system, together with the stability analysis of a special case of neural networks. Numerical examples are given in Section 4 and conclusions are drawn in Section 5.

    Notations

    Throughout this paper, $\mathbb{R}$ and $\mathbb{R}^n$ represent the set of real numbers and the $n$-dimensional Euclidean space, respectively. $M>(\geq)\,0$ means that the symmetric matrix $M$ is positive (semi-positive) definite. $M<(\leq)\,0$ denotes that the symmetric matrix $M$ is negative (semi-negative) definite. $M^T$ and $M^{-1}$ denote the transpose and the inverse of matrix $M$, respectively. $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ denote the maximum eigenvalue and the minimum eigenvalue of matrix $M$, respectively. The symbol $*$ represents the symmetric block in a symmetric matrix. $I$ is the identity matrix with appropriate dimensions. $e_i$ represents the unit column vector having one element on its $i$th row and zeros elsewhere. $C([a_1,a_2],\mathbb{R}^n)$ denotes the set of continuous functions mapping the interval $[a_1,a_2]$ to $\mathbb{R}^n$. $L_2[0,\infty)$ represents the space of functions $\zeta:\mathbb{R}^+\to\mathbb{R}^n$ with the norm $\|\zeta\|_{L_2}=\big[\int_0^\infty|\zeta(\theta)|^2\,d\theta\big]^{1/2}$. For $\vartheta\in\mathbb{R}^n$, the norm of $\vartheta$, denoted by $\|\vartheta\|$, is defined by $\|\vartheta\|=\big[\sum_{i=1}^n|\vartheta_i|^2\big]^{1/2}$; $\|\vartheta(t+\nu)\|_{cl}=\max\big\{\sup_{-\max\{\sigma_2,\delta_2\}\le\nu\le0}\|\vartheta(t+\nu)\|^2,\ \sup_{-\max\{\sigma_2,\delta_2\}\le\nu\le0}\|\dot{\vartheta}(t+\nu)\|^2\big\}$.

    We consider the uncertain neural network model with interval discrete and distributed time-varying delays of the form

    $$\begin{aligned} \dot{x}(t)={}&-(A+\Delta A(t))x(t)+(B+\Delta B(t))f(x(t))+(C+\Delta C(t))k(x(t-\sigma(t)))\\ &+(D+\Delta D(t))\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+E\omega(t)+U(t),\\ z(t)={}&C_1x(t)+C_2x(t-\sigma(t))+C_3\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+C_4\omega(t),\\ x(t)={}&\phi(t),\quad t\in[-\varrho,0], \end{aligned} \tag{2.1}$$

    where $x(t)=[x_1(t),x_2(t),\dots,x_n(t)]^T\in\mathbb{R}^n$ is the neuron state vector, $f(x(t)),k(x(t)),h(x(t))\in\mathbb{R}^n$ are the neuron activation functions, $z(t)\in\mathbb{R}^n$ is the output vector, $\omega(t)\in\mathbb{R}^n$ is the input vector with $\omega(t)\in L_2[0,\infty)$, $U(t)\in\mathbb{R}^n$ is the control input, $A=\operatorname{diag}\{a_1,a_2,\dots,a_n\}>0$, $B$ is the connection weight matrix, $C$ is the discretely delayed connection weight matrix, $D$ is the distributively delayed connection weight matrix, $E,C_1,C_2,C_3,C_4$ are given constant matrices, and $\phi(t)\in C([-\varrho,0],\mathbb{R}^n)$ is the initial function. $\sigma(t)$ is the interval discrete time-varying delay satisfying $0\le\sigma_1\le\sigma(t)\le\sigma_2$, where $\sigma_1,\sigma_2\in\mathbb{R}$. $\delta_i(t)$ $(i=1,2)$ are the interval distributed time-varying delays satisfying $0\le\delta_1\le\delta_1(t)\le\delta_2(t)\le\delta_2$, where $\delta_1,\delta_2\in\mathbb{R}$. $\varrho=\max\{\sigma_2,\delta_2\}$ is a known real constant. The time-varying uncertainty matrices $\Delta A(t),\Delta B(t),\Delta C(t)$, and $\Delta D(t)$ are given by

    $$\Delta A(t)=J_1S_1(t)\Sigma_1,\quad \Delta B(t)=J_2S_2(t)\Sigma_2,\quad \Delta C(t)=J_3S_3(t)\Sigma_3,\quad \Delta D(t)=J_4S_4(t)\Sigma_4,$$

    where $J_1,J_2,J_3,J_4,\Sigma_1,\Sigma_2,\Sigma_3$, and $\Sigma_4$ are known constant matrices with appropriate dimensions, and $S_1(t),S_2(t),S_3(t),S_4(t)$ are unknown uncertain matrices satisfying

    $$S_1^T(t)S_1(t)\le I,\quad S_2^T(t)S_2(t)\le I,\quad S_3^T(t)S_3(t)\le I,\quad S_4^T(t)S_4(t)\le I.$$
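    For instance (our illustration, not one of the numerical examples below), $S_1(t)=\sin(t)I$ is an admissible uncertainty, since

    $$S_1^T(t)S_1(t)=\sin^2(t)\,I\le I,$$

    so that $\Delta A(t)=J_1\sin(t)\Sigma_1$ models a norm-bounded, time-varying perturbation of the self-feedback matrix $A$.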

    The neuron activation functions f(x(t)),k(x(t)) and h(x(t)) satisfy the following conditions:

    (A1) f is continuous and satisfies

    $$F_i^-\le\frac{f_i(\alpha_1)-f_i(\alpha_2)}{\alpha_1-\alpha_2}\le F_i^+$$

    for all $\alpha_1\ne\alpha_2$, where $F_i^-,F_i^+\in\mathbb{R}$ and $f_i(0)=0$.

    (A2) k is continuous and satisfies

    $$K_i^-\le\frac{k_i(\alpha_1)-k_i(\alpha_2)}{\alpha_1-\alpha_2}\le K_i^+$$

    for all $\alpha_1\ne\alpha_2$, where $K_i^-,K_i^+\in\mathbb{R}$ and $k_i(0)=0$.

    (A3) h is continuous and satisfies

    $$H_i^-\le\frac{h_i(\alpha_1)-h_i(\alpha_2)}{\alpha_1-\alpha_2}\le H_i^+$$

    for all $\alpha_1\ne\alpha_2$, where $H_i^-,H_i^+\in\mathbb{R}$ and $h_i(0)=0$.
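    As a concrete check of these sector conditions (our illustration), the activation $h_i(x_i)=\tanh(x_i)$ used in Section 4 satisfies (A3): by the mean value theorem,

    $$0\le\frac{\tanh(\alpha_1)-\tanh(\alpha_2)}{\alpha_1-\alpha_2}\le1,\qquad \tanh(0)=0,$$

    so one may take $H_i^-=0$ and $H_i^+=1$ (any looser bounds $H_i^-\le0$ and $H_i^+\ge1$ also work).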

    The state feedback is considered with

    U(t)=Kx(t).

    Substituting $U(t)=Kx(t)$ into (2.1), we obtain

    $$\begin{aligned} \dot{x}(t)={}&(K-A-\Delta A(t))x(t)+(B+\Delta B(t))f(x(t))+(C+\Delta C(t))k(x(t-\sigma(t)))\\ &+(D+\Delta D(t))\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+E\omega(t),\\ z(t)={}&C_1x(t)+C_2x(t-\sigma(t))+C_3\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+C_4\omega(t),\\ x(t)={}&\phi(t),\quad t\in[-\varrho,0]. \end{aligned} \tag{2.2}$$

    Definition 2.1. [28] The uncertain NNs (2.2) with $\omega(t)=0$ is exponentially stable if there exist constants $b_1>0$ and $b_2>0$ such that

    $$\|x(t)\|^2\le b_1e^{-b_2t}\,\|x(\nu)\|_{cl}.$$

    Definition 2.2. [28] For a given scalar $\upsilon\in[0,1]$, the uncertain NNs (2.2) is exponentially stable and meets a predefined mixed passive and $H_\infty$ performance index $\gamma$ if the following conditions hold simultaneously:

    (1) the uncertain NNs (2.2) is exponentially stable in the sense of Definition 2.1;

    (2) under the zero initial condition, there exists a scalar $\gamma>0$ such that the following inequality is satisfied:

    $$\int_0^{T_p}\big[\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)\big]\,dt\le\gamma^2\int_0^{T_p}\omega^T(t)\omega(t)\,dt, \tag{2.3}$$

    for any $T_p\ge0$ and any non-zero $\omega(t)\in L_2[0,\infty)$.

    Remark 1. The condition (2.3) includes the passive performance index and the $H_\infty$ performance index. If $\upsilon=1$, the condition (2.3) reduces to the $H_\infty$ performance index; if $\upsilon=0$, the condition (2.3) reduces to the passive performance index; and when $\upsilon$ takes a value in $(0,1)$, the condition (2.3) becomes the mixed passive and $H_\infty$ performance index.
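    Writing out the two extreme cases of (2.3) makes this explicit:

    $$\upsilon=1:\quad \int_0^{T_p}z^T(t)z(t)\,dt\le\gamma^2\int_0^{T_p}\omega^T(t)\omega(t)\,dt\quad(H_\infty\text{ disturbance attenuation}),$$

    $$\upsilon=0:\quad 2\gamma\int_0^{T_p}z^T(t)\omega(t)\,dt\ge-\gamma^2\int_0^{T_p}\omega^T(t)\omega(t)\,dt\quad(\text{passivity}).$$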

    Lemma 2.3. [33,34] Suppose $0\le\eta_1<\eta_2$ and $x(t)\in\mathbb{R}^n$. Then, for any matrix $M>0$, the following inequalities hold:

    $$\begin{aligned} (\eta_2-\eta_1)\int_{t-\eta_2}^{t-\eta_1}x^T(s)Mx(s)\,ds&\ge\int_{t-\eta_2}^{t-\eta_1}x^T(s)\,ds\,M\int_{t-\eta_2}^{t-\eta_1}x(s)\,ds,\\ \frac{\eta_2^2-\eta_1^2}{2}\int_{-\eta_2}^{-\eta_1}\int_{t+\beta}^{t}x^T(s)Mx(s)\,ds\,d\beta&\ge\int_{-\eta_2}^{-\eta_1}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\,M\int_{-\eta_2}^{-\eta_1}\int_{t+\beta}^{t}x(s)\,ds\,d\beta,\\ \frac{\eta_2^3}{6}\int_{-\eta_2}^{0}\int_{\beta}^{0}\int_{t+\lambda}^{t}x^T(s)Mx(s)\,ds\,d\lambda\,d\beta&\ge\int_{-\eta_2}^{0}\int_{\beta}^{0}\int_{t+\lambda}^{t}x^T(s)\,ds\,d\lambda\,d\beta\,M\int_{-\eta_2}^{0}\int_{\beta}^{0}\int_{t+\lambda}^{t}x(s)\,ds\,d\lambda\,d\beta. \end{aligned}$$

    Lemma 2.4. [35] For a matrix $M>0$ and a differentiable function $\{x(\alpha)\,|\,\alpha\in[a_1,a_2]\}$, the following inequality holds:

    $$\begin{aligned} \int_{a_1}^{a_2}\dot{x}^T(\alpha)M\dot{x}(\alpha)\,d\alpha\ge{}&\frac{1}{a_2-a_1}[x(a_2)-x(a_1)]^TM[x(a_2)-x(a_1)]\\ &+\frac{3}{a_2-a_1}\Big[x(a_2)+x(a_1)-\frac{2}{a_2-a_1}\int_{a_1}^{a_2}x(\alpha)\,d\alpha\Big]^TM\Big[x(a_2)+x(a_1)-\frac{2}{a_2-a_1}\int_{a_1}^{a_2}x(\alpha)\,d\alpha\Big]. \end{aligned}$$

    Lemma 2.5. [36] For given matrices $P$, $Q$, and $R$ with $R^TR\le I$ and a scalar $\alpha>0$, the following inequality holds:

    $$PRQ+(PRQ)^T\le\alpha PP^T+\alpha^{-1}Q^TQ.$$

    Lemma 2.6. [37] Let $P,Q,R$ be given matrices such that $R>0$. Then

    $$\begin{bmatrix}Q&P^T\\P&-R\end{bmatrix}<0\iff Q+P^TR^{-1}P<0.$$
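    A scalar instance (for illustration only) shows the mechanism of Lemma 2.6: with $q,p,r\in\mathbb{R}$ and $r>0$,

    $$\begin{bmatrix}q&p\\p&-r\end{bmatrix}<0\iff q+\frac{p^2}{r}<0,$$

    which is how a nonlinear term of the form $P^TR^{-1}P$ is traded for a larger, but linear, matrix inequality.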

    In this section, we first find sufficient conditions which guarantee that the neural networks without parameter uncertainties are exponentially stable with a mixed passive and $H_\infty$ performance. That is, we consider the following model

    $$\begin{aligned} \dot{x}(t)={}&(K-A)x(t)+Bf(x(t))+Ck(x(t-\sigma(t)))+D\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+E\omega(t),\\ z(t)={}&C_1x(t)+C_2x(t-\sigma(t))+C_3\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+C_4\omega(t),\\ x(t)={}&\phi(t),\quad t\in[-\varrho,0]. \end{aligned} \tag{3.1}$$

    In this paper, we use the following notation:

    $$\begin{aligned} &\tilde{F}_i=\max\{|F_i^-|,|F_i^+|\},\qquad \tilde{K}_i=\max\{|K_i^-|,|K_i^+|\},\qquad \tilde{H}_i=\max\{|H_i^-|,|H_i^+|\},\\ &F_1=\operatorname{diag}\{F_1^-F_1^+,F_2^-F_2^+,\dots,F_n^-F_n^+\},\qquad F_2=\operatorname{diag}\Big\{\frac{F_1^-+F_1^+}{2},\frac{F_2^-+F_2^+}{2},\dots,\frac{F_n^-+F_n^+}{2}\Big\},\\ &K_1=\operatorname{diag}\{K_1^-K_1^+,K_2^-K_2^+,\dots,K_n^-K_n^+\},\qquad K_2=\operatorname{diag}\Big\{\frac{K_1^-+K_1^+}{2},\frac{K_2^-+K_2^+}{2},\dots,\frac{K_n^-+K_n^+}{2}\Big\},\\ &H_1=\operatorname{diag}\{H_1^-H_1^+,H_2^-H_2^+,\dots,H_n^-H_n^+\},\qquad H_2=\operatorname{diag}\Big\{\frac{H_1^-+H_1^+}{2},\frac{H_2^-+H_2^+}{2},\dots,\frac{H_n^-+H_n^+}{2}\Big\}, \end{aligned}$$

    $$\begin{aligned} \xi^T(t)=\Big[&x^T(t),\ \dot{x}^T(t),\ x^T(t-\sigma_1),\ x^T(t-\sigma_2),\ x^T(t-\sigma(t)),\ f^T(x(t)),\ k^T(x(t-\sigma(t))),\ h^T(x(t)),\\ &\frac{1}{\sigma_1}\int_{t-\sigma_1}^{t}x^T(s)\,ds,\ \frac{1}{\sigma_2}\int_{t-\sigma_2}^{t}x^T(s)\,ds,\ \frac{1}{\sigma(t)-\sigma_1}\int_{t-\sigma(t)}^{t-\sigma_1}x^T(s)\,ds,\ \frac{1}{\sigma_2-\sigma(t)}\int_{t-\sigma_2}^{t-\sigma(t)}x^T(s)\,ds,\\ &\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(s))\,ds,\ \int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta,\ \int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta,\ \omega^T(t)\Big]. \end{aligned}$$

    Theorem 3.1. For given scalars $\sigma_1,\sigma_2,\delta_1,\delta_2,\beta_1,\beta_2$, $\gamma>0$, and $\upsilon\in[0,1]$, if there exist eleven $n\times n$ matrices $P>0$, $Q_1>0$, $Q_2>0$, $R_1>0$, $R_2>0$, $U>0$, $L>0$, $X_1>0$, $X_2>0$, $N>0$, $Z$ and three $n\times n$ positive diagonal matrices $Y_1>0$, $Y_2>0$, $Y_3>0$ such that the following LMIs hold:

    $$\Theta+\Theta_1<0, \tag{3.2}$$

    $$\Theta+\Theta_2<0, \tag{3.3}$$

    where

    $$\Theta_1=-e_{15}X_1e_{15}^T,\qquad \Theta_2=-e_{14}X_1e_{14}^T,\qquad \Theta=\begin{bmatrix}\Theta^{(1,1)}&\Theta^{(1,2)}\\ *&\Theta^{(2,2)}\end{bmatrix},$$

    with

    $$\Theta^{(1,1)}=\begin{bmatrix} \theta_{1,1}&\theta_{1,2}&-2R_1&-2R_2&\upsilon C_1^TC_2&\theta_{1,6}&\beta_1N^TC&H_2Y_3\\ *&\theta_{2,2}&0&0&0&\beta_2N^TB&\beta_2N^TC&0\\ *&*&\theta_{3,3}&0&-2U&0&0&0\\ *&*&*&\theta_{4,4}&-2U&0&0&0\\ *&*&*&*&\theta_{5,5}&0&K_2Y_2&0\\ *&*&*&*&*&-Y_1&0&0\\ *&*&*&*&*&*&-Y_2&0\\ *&*&*&*&*&*&*&\theta_{8,8} \end{bmatrix},$$

    $$\Theta^{(1,2)}=\begin{bmatrix} 6R_1&6R_2&0&0&\theta_{1,13}&\frac{\sigma_2^2-\sigma_1^2}{2}X_2&\frac{\sigma_2^2-\sigma_1^2}{2}X_2&\theta_{1,16}\\ 0&0&0&0&\beta_2N^TD&0&0&\beta_2N^TE\\ 6R_1&0&6U&0&0&0&0&0\\ 0&6R_2&0&6U&0&0&0&0\\ 0&0&6U&6U&\upsilon C_2^TC_3&0&0&\theta_{5,16}\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0 \end{bmatrix},$$

    $$\Theta^{(2,2)}=\begin{bmatrix} -12R_1&0&0&0&0&0&0&0\\ *&-12R_2&0&0&0&0&0&0\\ *&*&-12U&0&0&0&0&0\\ *&*&*&-12U&0&0&0&0\\ *&*&*&*&\theta_{13,13}&0&0&\theta_{13,16}\\ *&*&*&*&*&\theta_{14,14}&-X_2&0\\ *&*&*&*&*&*&\theta_{15,15}&0\\ *&*&*&*&*&*&*&\theta_{16,16} \end{bmatrix},$$

    in which

    $$\begin{aligned} \theta_{1,1}={}&Q_1+Q_2-4R_1-4R_2+\upsilon C_1^TC_1-F_1Y_1-H_1Y_3+2\beta_1Z-2\beta_1N^TA+\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_1-\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_2,\\ \theta_{1,2}={}&P-\beta_1N^T+\beta_2Z^T-\beta_2A^TN,\qquad \theta_{1,6}=F_2Y_1+\beta_1N^TB,\\ \theta_{1,13}={}&\upsilon C_1^TC_3+\beta_1N^TD,\qquad \theta_{1,16}=\upsilon C_1^TC_4-(1-\upsilon)\gamma C_1^T+\beta_1N^TE,\\ \theta_{2,2}={}&\sigma_1^2R_1+\sigma_2^2R_2+(\sigma_2-\sigma_1)^2U-2\beta_2N^T+\frac{(\sigma_2^3-\sigma_1^3)^2}{36}X_2,\\ \theta_{3,3}={}&-Q_1-4R_1-4U,\qquad \theta_{4,4}=-Q_2-4R_2-4U,\\ \theta_{5,5}={}&-8U-K_1Y_2+\upsilon C_2^TC_2,\qquad \theta_{5,16}=\upsilon C_2^TC_4-(1-\upsilon)\gamma C_2^T,\\ \theta_{8,8}={}&(\delta_2-\delta_1)^2L-Y_3,\qquad \theta_{13,13}=-L+\upsilon C_3^TC_3,\qquad \theta_{13,16}=\upsilon C_3^TC_4-(1-\upsilon)\gamma C_3^T,\\ \theta_{14,14}={}&-X_1-X_2,\qquad \theta_{15,15}=-X_1-X_2,\qquad \theta_{16,16}=\upsilon C_4^TC_4-2(1-\upsilon)\gamma C_4^T-\gamma^2I, \end{aligned}$$

    then the NNs (3.1) is exponentially stable with a mixed passive and $H_\infty$ performance. Moreover, the controller gain is given by

    $$K=N^{-1}Z.$$

    Proof. Consider the model (3.1) with the following Lyapunov-Krasovskii functional

    $$V(x(t),t)=\sum_{i=1}^{9}V_i(x(t),t),$$

    where

    $$\begin{aligned} V_1(x(t),t)={}&x^T(t)Px(t),\\ V_2(x(t),t)={}&\int_{t-\sigma_1}^{t}x^T(s)Q_1x(s)\,ds,\qquad V_3(x(t),t)=\int_{t-\sigma_2}^{t}x^T(s)Q_2x(s)\,ds,\\ V_4(x(t),t)={}&\sigma_1\int_{-\sigma_1}^{0}\int_{t+s}^{t}\dot{x}^T(\tau)R_1\dot{x}(\tau)\,d\tau\,ds,\qquad V_5(x(t),t)=\sigma_2\int_{-\sigma_2}^{0}\int_{t+s}^{t}\dot{x}^T(\tau)R_2\dot{x}(\tau)\,d\tau\,ds,\\ V_6(x(t),t)={}&(\sigma_2-\sigma_1)\int_{-\sigma_2}^{-\sigma_1}\int_{t+s}^{t}\dot{x}^T(\tau)U\dot{x}(\tau)\,d\tau\,ds,\\ V_7(x(t),t)={}&(\delta_2-\delta_1)\int_{-\delta_2}^{-\delta_1}\int_{t+s}^{t}h^T(x(\tau))Lh(x(\tau))\,d\tau\,ds,\\ V_8(x(t),t)={}&\frac{\sigma_2^2-\sigma_1^2}{2}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{t+\lambda}^{t}x^T(s)X_1x(s)\,ds\,d\lambda\,d\beta,\\ V_9(x(t),t)={}&\frac{\sigma_2^3-\sigma_1^3}{6}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{\lambda}^{0}\int_{t+\varphi}^{t}\dot{x}^T(s)X_2\dot{x}(s)\,ds\,d\varphi\,d\lambda\,d\beta. \end{aligned} \tag{3.4}$$

    Taking the time derivatives of $V_i(x(t),t)$, $i=1,2,\dots,9$, along the trajectories of (3.1), we obtain

    $$\dot{V}_1(x(t),t)=x^T(t)P\dot{x}(t)+\dot{x}^T(t)Px(t), \tag{3.5}$$

    $$\dot{V}_2(x(t),t)=x^T(t)Q_1x(t)-x^T(t-\sigma_1)Q_1x(t-\sigma_1), \tag{3.6}$$

    $$\dot{V}_3(x(t),t)=x^T(t)Q_2x(t)-x^T(t-\sigma_2)Q_2x(t-\sigma_2), \tag{3.7}$$

    $$\dot{V}_4(x(t),t)=\sigma_1\int_{-\sigma_1}^{0}\big[\dot{x}^T(t)R_1\dot{x}(t)-\dot{x}^T(t+s)R_1\dot{x}(t+s)\big]\,ds=\sigma_1^2\dot{x}^T(t)R_1\dot{x}(t)-\sigma_1\int_{t-\sigma_1}^{t}\dot{x}^T(\alpha)R_1\dot{x}(\alpha)\,d\alpha, \tag{3.8}$$

    $$\dot{V}_5(x(t),t)=\sigma_2\int_{-\sigma_2}^{0}\big[\dot{x}^T(t)R_2\dot{x}(t)-\dot{x}^T(t+s)R_2\dot{x}(t+s)\big]\,ds=\sigma_2^2\dot{x}^T(t)R_2\dot{x}(t)-\sigma_2\int_{t-\sigma_2}^{t}\dot{x}^T(\alpha)R_2\dot{x}(\alpha)\,d\alpha, \tag{3.9}$$

    $$\dot{V}_6(x(t),t)=(\sigma_2-\sigma_1)^2\dot{x}^T(t)U\dot{x}(t)-(\sigma_2-\sigma_1)\int_{t-\sigma_2}^{t-\sigma_1}\dot{x}^T(\alpha)U\dot{x}(\alpha)\,d\alpha, \tag{3.10}$$

    $$\begin{aligned}\dot{V}_7(x(t),t)&=(\delta_2-\delta_1)^2h^T(x(t))Lh(x(t))-(\delta_2-\delta_1)\int_{t-\delta_2}^{t-\delta_1}h^T(x(\alpha))Lh(x(\alpha))\,d\alpha\\ &\le(\delta_2-\delta_1)^2h^T(x(t))Lh(x(t))-(\delta_2(t)-\delta_1(t))\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(\alpha))Lh(x(\alpha))\,d\alpha,\end{aligned} \tag{3.11}$$

    $$\dot{V}_8(x(t),t)=\frac{(\sigma_2^2-\sigma_1^2)^2}{4}x^T(t)X_1x(t)-\frac{\sigma_2^2-\sigma_1^2}{2}\int_{-\sigma_2}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)X_1x(s)\,ds\,d\beta, \tag{3.12}$$

    $$\dot{V}_9(x(t),t)=\frac{(\sigma_2^3-\sigma_1^3)^2}{36}\dot{x}^T(t)X_2\dot{x}(t)-\frac{\sigma_2^3-\sigma_1^3}{6}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{t+\lambda}^{t}\dot{x}^T(s)X_2\dot{x}(s)\,ds\,d\lambda\,d\beta. \tag{3.13}$$

    Utilizing Lemma 2.4, the following inequalities are easily obtained:

    $$\begin{aligned}-\sigma_1\int_{t-\sigma_1}^{t}\dot{x}^T(\alpha)R_1\dot{x}(\alpha)\,d\alpha\le{}&-[x(t)-x(t-\sigma_1)]^TR_1[x(t)-x(t-\sigma_1)]\\ &-3\Big[x(t)+x(t-\sigma_1)-\frac{2}{\sigma_1}\int_{t-\sigma_1}^{t}x(\alpha)\,d\alpha\Big]^TR_1\Big[x(t)+x(t-\sigma_1)-\frac{2}{\sigma_1}\int_{t-\sigma_1}^{t}x(\alpha)\,d\alpha\Big],\end{aligned} \tag{3.14}$$

    $$\begin{aligned}-\sigma_2\int_{t-\sigma_2}^{t}\dot{x}^T(\alpha)R_2\dot{x}(\alpha)\,d\alpha\le{}&-[x(t)-x(t-\sigma_2)]^TR_2[x(t)-x(t-\sigma_2)]\\ &-3\Big[x(t)+x(t-\sigma_2)-\frac{2}{\sigma_2}\int_{t-\sigma_2}^{t}x(\alpha)\,d\alpha\Big]^TR_2\Big[x(t)+x(t-\sigma_2)-\frac{2}{\sigma_2}\int_{t-\sigma_2}^{t}x(\alpha)\,d\alpha\Big],\end{aligned} \tag{3.15}$$

    $$\begin{aligned}-(\sigma_2-\sigma_1)\int_{t-\sigma_2}^{t-\sigma_1}\dot{x}^T(\alpha)U\dot{x}(\alpha)\,d\alpha\le{}&-[x(t-\sigma(t))-x(t-\sigma_2)]^TU[x(t-\sigma(t))-x(t-\sigma_2)]\\ &-3\Big[x(t-\sigma(t))+x(t-\sigma_2)-\frac{2}{\sigma_2-\sigma(t)}\int_{t-\sigma_2}^{t-\sigma(t)}x(\alpha)\,d\alpha\Big]^TU\Big[x(t-\sigma(t))+x(t-\sigma_2)-\frac{2}{\sigma_2-\sigma(t)}\int_{t-\sigma_2}^{t-\sigma(t)}x(\alpha)\,d\alpha\Big]\\ &-[x(t-\sigma_1)-x(t-\sigma(t))]^TU[x(t-\sigma_1)-x(t-\sigma(t))]\\ &-3\Big[x(t-\sigma_1)+x(t-\sigma(t))-\frac{2}{\sigma(t)-\sigma_1}\int_{t-\sigma(t)}^{t-\sigma_1}x(\alpha)\,d\alpha\Big]^TU\Big[x(t-\sigma_1)+x(t-\sigma(t))-\frac{2}{\sigma(t)-\sigma_1}\int_{t-\sigma(t)}^{t-\sigma_1}x(\alpha)\,d\alpha\Big].\end{aligned} \tag{3.16}$$

    By utilizing Lemma 2.3, we achieve the following inequalities

    $$-(\delta_2(t)-\delta_1(t))\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(\alpha))Lh(x(\alpha))\,d\alpha\le-\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(\alpha))\,d\alpha\,L\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(\alpha))\,d\alpha, \tag{3.17}$$

    $$\begin{aligned}-\frac{\sigma_2^2-\sigma_1^2}{2}\int_{-\sigma_2}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)X_1x(s)\,ds\,d\beta\le{}&-\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\,X_1\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x(s)\,ds\,d\beta\\ &-\varepsilon\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\,X_1\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x(s)\,ds\,d\beta\\ &-(1-\varepsilon)\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\,X_1\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x(s)\,ds\,d\beta\\ &-\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\,X_1\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x(s)\,ds\,d\beta,\end{aligned} \tag{3.18}$$

    where $\varepsilon=\dfrac{\sigma^2(t)-\sigma_1^2}{\sigma_2^2-\sigma_1^2}$, which satisfies $0\le\varepsilon\le1$ since $\sigma_1\le\sigma(t)\le\sigma_2$, and

    $$\begin{aligned}-\frac{\sigma_2^3-\sigma_1^3}{6}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{t+\lambda}^{t}\dot{x}^T(s)X_2\dot{x}(s)\,ds\,d\lambda\,d\beta\le{}&-\Big[\frac{\sigma_2^2-\sigma_1^2}{2}x^T(t)-\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta-\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x^T(s)\,ds\,d\beta\Big]\\ &\times X_2\Big[\frac{\sigma_2^2-\sigma_1^2}{2}x(t)-\int_{-\sigma_2}^{-\sigma(t)}\int_{t+\beta}^{t}x(s)\,ds\,d\beta-\int_{-\sigma(t)}^{-\sigma_1}\int_{t+\beta}^{t}x(s)\,ds\,d\beta\Big].\end{aligned} \tag{3.19}$$

    It follows from (A1) that $[f_i(x_i(t))-F_i^-x_i(t)][f_i(x_i(t))-F_i^+x_i(t)]\le0$ for every $i=1,2,\dots,n$, which is equivalent to

    $$\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}^T\begin{bmatrix}F_i^-F_i^+e_ie_i^T&-\frac{F_i^-+F_i^+}{2}e_ie_i^T\\ -\frac{F_i^-+F_i^+}{2}e_ie_i^T&e_ie_i^T\end{bmatrix}\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}\le0,$$

    for every i=1,2,,n.

    Define Y1=diag{y1,y2,,yn}>0, then

    $$\sum_{i=1}^{n}y_i\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}^T\begin{bmatrix}F_i^-F_i^+e_ie_i^T&-\frac{F_i^-+F_i^+}{2}e_ie_i^T\\ -\frac{F_i^-+F_i^+}{2}e_ie_i^T&e_ie_i^T\end{bmatrix}\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}\le0,$$

    which is equivalent to

    $$\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}^T\begin{bmatrix}F_1Y_1&-F_2Y_1\\ -F_2Y_1&Y_1\end{bmatrix}\begin{bmatrix}x(t)\\ f(x(t))\end{bmatrix}\le0. \tag{3.20}$$

    Similarly, from (A2) and (A3), defining $Y_2=\operatorname{diag}\{\tilde{y}_1,\tilde{y}_2,\dots,\tilde{y}_n\}>0$ and $Y_3=\operatorname{diag}\{\hat{y}_1,\hat{y}_2,\dots,\hat{y}_n\}>0$, we have

    $$\begin{bmatrix}x(t-\sigma(t))\\ k(x(t-\sigma(t)))\end{bmatrix}^T\begin{bmatrix}K_1Y_2&-K_2Y_2\\ -K_2Y_2&Y_2\end{bmatrix}\begin{bmatrix}x(t-\sigma(t))\\ k(x(t-\sigma(t)))\end{bmatrix}\le0, \tag{3.21}$$

    $$\begin{bmatrix}x(t)\\ h(x(t))\end{bmatrix}^T\begin{bmatrix}H_1Y_3&-H_2Y_3\\ -H_2Y_3&Y_3\end{bmatrix}\begin{bmatrix}x(t)\\ h(x(t))\end{bmatrix}\le0. \tag{3.22}$$

    We use the following zero equation:

    $$0=2\big[x^T(t)\beta_1N^T+\dot{x}^T(t)\beta_2N^T\big]\Big[-\dot{x}(t)+(N^{-1}Z-A)x(t)+Bf(x(t))+Ck(x(t-\sigma(t)))+D\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+E\omega(t)\Big].$$

    Adding the above zero equation to $\dot{V}(x(t),t)$, we obtain the following inequality from (2.3) and (3.5)–(3.22):

    $$\dot{V}(x(t),t)+\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)-\gamma^2\omega^T(t)\omega(t)\le\xi^T(t)\big(\varepsilon\Theta^{(1)}+(1-\varepsilon)\Theta^{(2)}\big)\xi(t), \tag{3.23}$$

    where $\Theta^{(i)}=\Theta+\Theta_i$ $(i=1,2)$ with $\Theta$ and $\Theta_i$ defined in (3.2) and (3.3).

    Since $0\le\varepsilon\le1$, the term $\varepsilon\Theta^{(1)}+(1-\varepsilon)\Theta^{(2)}$ is a convex combination of $\Theta^{(1)}$ and $\Theta^{(2)}$; it is negative definite for every admissible $\varepsilon$ if and only if both endpoints are negative definite, that is,

    $$\Theta^{(1)}<0, \tag{3.24}$$

    $$\Theta^{(2)}<0. \tag{3.25}$$

    So, (3.24) and (3.25) are equivalent to (3.2) and (3.3), respectively.

    Hence, we obtain

    $$\dot{V}(x(t),t)+\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)-\gamma^2\omega^T(t)\omega(t)<0. \tag{3.26}$$

    Under the zero initial condition, for any Tp we find that

    $$\int_0^{T_p}\big[\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)-\gamma^2\omega^T(t)\omega(t)\big]\,dt\le\int_0^{T_p}\big[\dot{V}(x(t),t)+\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)-\gamma^2\omega^T(t)\omega(t)\big]\,dt<0,$$

    that is

    $$\int_0^{T_p}\big[\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)\big]\,dt\le\gamma^2\int_0^{T_p}\omega^T(t)\omega(t)\,dt.$$

    In this case, the condition (2.3) is guaranteed for any non-zero $\omega(t)\in L_2[0,\infty)$. If $\omega(t)=0$, in view of (3.26), there exists a scalar $\upsilon_1>0$ such that

    $$\dot{V}(x(t),t)<-\upsilon_1x^T(t)x(t). \tag{3.27}$$

    By the definitions of Vi(x(t),t), it is easy to derive the following inequalities:

    $$\begin{aligned} V_1(x(t),t)&\le\lambda_{\max}(P)\|x(t)\|^2,\\ V_4(x(t),t)&\le\sigma_1^2\int_{t-\sigma_1}^{t}\dot{x}^T(\alpha)R_1\dot{x}(\alpha)\,d\alpha,\qquad V_5(x(t),t)\le\sigma_2^2\int_{t-\sigma_2}^{t}\dot{x}^T(\alpha)R_2\dot{x}(\alpha)\,d\alpha,\\ V_6(x(t),t)&\le(\sigma_2-\sigma_1)^2\int_{t-\sigma_2}^{t}\dot{x}^T(\tau)U\dot{x}(\tau)\,d\tau,\qquad V_7(x(t),t)\le(\delta_2-\delta_1)^2\int_{t-\delta_2}^{t}h^T(x(\tau))Lh(x(\tau))\,d\tau,\\ V_8(x(t),t)&\le\frac{(\sigma_2^2-\sigma_1^2)^2}{4}\int_{t-\sigma_2}^{t}x^T(s)X_1x(s)\,ds,\qquad V_9(x(t),t)\le\frac{(\sigma_2^3-\sigma_1^3)^2}{36}\int_{t-\sigma_2}^{t}\dot{x}^T(s)X_2\dot{x}(s)\,ds. \end{aligned} \tag{3.28}$$

    We are now ready to establish the exponential stability of (3.1). Consider the functional $e^{2ct}V(x(t),t)$, where $c$ is a constant. Using (3.27) and (3.28), we have

    $$\begin{aligned} \frac{d}{dt}\big[e^{2ct}V(x(t),t)\big]&=e^{2ct}\dot{V}(x(t),t)+2ce^{2ct}V(x(t),t)\\ &<e^{2ct}\Big[-\upsilon_1+2c\Big(\lambda_{\max}(P)+\sigma_1\lambda_{\max}(Q_1)+\sigma_2\lambda_{\max}(Q_2)+\sigma_1^3\lambda_{\max}(R_1)+\sigma_2^3\lambda_{\max}(R_2)\\ &\qquad+\sigma_2(\sigma_2-\sigma_1)^2\lambda_{\max}(U)+\delta_2(\delta_2-\delta_1)^2\lambda_{\max}(L)\max_{i\in\{1,2,\dots,n\}}(\tilde{H}_i^2)\\ &\qquad+\frac{\sigma_2(\sigma_2^2-\sigma_1^2)^2}{4}\lambda_{\max}(X_1)+\frac{\sigma_2(\sigma_2^3-\sigma_1^3)^2}{36}\lambda_{\max}(X_2)\Big)\Big]\|x(t+\nu)\|_{cl}. \end{aligned} \tag{3.29}$$

    Let

    $$\begin{aligned}\mu_1={}&\lambda_{\max}(P)+\sigma_1\lambda_{\max}(Q_1)+\sigma_2\lambda_{\max}(Q_2)+\sigma_1^3\lambda_{\max}(R_1)+\sigma_2^3\lambda_{\max}(R_2)+\sigma_2(\sigma_2-\sigma_1)^2\lambda_{\max}(U)\\ &+\delta_2(\delta_2-\delta_1)^2\lambda_{\max}(L)\max_{i\in\{1,2,\dots,n\}}(\tilde{H}_i^2)+\frac{\sigma_2(\sigma_2^2-\sigma_1^2)^2}{4}\lambda_{\max}(X_1)+\frac{\sigma_2(\sigma_2^3-\sigma_1^3)^2}{36}\lambda_{\max}(X_2).\end{aligned}$$

    Now, we take $c$ to be a constant satisfying $c\le\frac{\upsilon_1}{2\mu_1}$, and then obtain from (3.29) that

    $$\frac{d}{dt}\big[e^{2ct}V(x(t),t)\big]\le0, \tag{3.30}$$

    which, together with (3.4) and (3.28), implies that

    $$\begin{aligned} e^{2ct}V(x(t),t)\le V(x(0),0)={}&\sum_{i=1}^{9}V_i(x(0),0)\\ \le{}&\lambda_{\max}(P)\|x(0)\|^2+\int_{-\sigma_1}^{0}x^T(s)Q_1x(s)\,ds+\int_{-\sigma_2}^{0}x^T(s)Q_2x(s)\,ds\\ &+\sigma_1^2\int_{-\sigma_1}^{0}\dot{x}^T(\tau)R_1\dot{x}(\tau)\,d\tau+\sigma_2^2\int_{-\sigma_2}^{0}\dot{x}^T(\tau)R_2\dot{x}(\tau)\,d\tau+(\sigma_2-\sigma_1)^2\int_{-\sigma_2}^{0}\dot{x}^T(\tau)U\dot{x}(\tau)\,d\tau\\ &+(\delta_2-\delta_1)^2\int_{-\delta_2}^{0}h^T(x(\tau))Lh(x(\tau))\,d\tau+\frac{(\sigma_2^2-\sigma_1^2)^2}{4}\int_{-\sigma_2}^{0}x^T(s)X_1x(s)\,ds\\ &+\frac{(\sigma_2^3-\sigma_1^3)^2}{36}\int_{-\sigma_2}^{0}\dot{x}^T(s)X_2\dot{x}(s)\,ds\\ \le{}&\mu_0\|x(\nu)\|_{cl}, \end{aligned}$$

    where

    $$\begin{aligned}\mu_0={}&\lambda_{\max}(P)+\sigma_1\lambda_{\max}(Q_1)+\sigma_2\lambda_{\max}(Q_2)+\sigma_1^3\lambda_{\max}(R_1)+\sigma_2^3\lambda_{\max}(R_2)+\sigma_2(\sigma_2-\sigma_1)^2\lambda_{\max}(U)\\ &+\delta_2(\delta_2-\delta_1)^2\lambda_{\max}(L)\max_{i\in\{1,2,\dots,n\}}(\tilde{H}_i^2)+\frac{\sigma_2(\sigma_2^2-\sigma_1^2)^2}{4}\lambda_{\max}(X_1)+\frac{\sigma_2(\sigma_2^3-\sigma_1^3)^2}{36}\lambda_{\max}(X_2),\end{aligned}$$

    and therefore

    $$V(x(t),t)\le\mu_0e^{-2ct}\|x(\nu)\|_{cl}.$$

    Noticing that $\lambda_{\min}(P)\|x(t)\|^2\le V(x(t),t)$, we obtain

    $$\|x(t)\|^2\le\frac{\mu_0}{\lambda_{\min}(P)}e^{-2ct}\|x(\nu)\|_{cl}. \tag{3.31}$$

    Letting $b_1=\frac{\mu_0}{\lambda_{\min}(P)}$ and $b_2=2c$, we can rewrite (3.31) as

    $$\|x(t)\|^2\le b_1e^{-b_2t}\|x(\nu)\|_{cl}.$$

    Hence, the NNs (3.1) is exponentially stable with a mixed passive and $H_\infty$ performance index $\gamma$. The proof is completed.

    In the second part, criteria for exponential stability with a mixed passive and $H_\infty$ performance of the uncertain neural networks are obtained by a proof similar to that of Theorem 3.1 together with Lemmas 2.5 and 2.6.

    Theorem 3.2. For given scalars $\sigma_1,\sigma_2,\delta_1,\delta_2,\beta_1,\beta_2$, $\gamma>0$, and $\upsilon\in[0,1]$, if there exist eleven $n\times n$ matrices $P>0$, $Q_1>0$, $Q_2>0$, $R_1>0$, $R_2>0$, $U>0$, $L>0$, $X_1>0$, $X_2>0$, $N>0$, $Z$, three $n\times n$ positive diagonal matrices $Y_1>0$, $Y_2>0$, $Y_3>0$, and eight positive constants $\alpha_i>0$ $(i=1,2,\dots,8)$ such that the following LMIs hold:

    $$\Psi+\Theta_1<0, \tag{3.32}$$

    $$\Psi+\Theta_2<0, \tag{3.33}$$

    wherein,

    $$\Theta_1=-e_{15}X_1e_{15}^T,\qquad \Theta_2=-e_{14}X_1e_{14}^T,\qquad \bar{\Theta}=\begin{bmatrix}\bar{\Theta}^{(1,1)}&\Theta^{(1,2)}\\ *&\bar{\Theta}^{(2,2)}\end{bmatrix},$$

    $$\Psi=\begin{bmatrix} \bar{\Theta}&N^TJ_1&N^TJ_1&N^TJ_2&N^TJ_2&N^TJ_3&N^TJ_3&N^TJ_4&N^TJ_4\\ *&-\alpha_1I&0&0&0&0&0&0&0\\ *&*&-\alpha_2I&0&0&0&0&0&0\\ *&*&*&-\alpha_3I&0&0&0&0&0\\ *&*&*&*&-\alpha_4I&0&0&0&0\\ *&*&*&*&*&-\alpha_5I&0&0&0\\ *&*&*&*&*&*&-\alpha_6I&0&0\\ *&*&*&*&*&*&*&-\alpha_7I&0\\ *&*&*&*&*&*&*&*&-\alpha_8I \end{bmatrix},$$

    with $\Theta^{(1,2)}$ as defined in Theorem 3.1,

    $$\bar{\Theta}^{(1,1)}=\begin{bmatrix} \bar\theta_{1,1}&\theta_{1,2}&-2R_1&-2R_2&\upsilon C_1^TC_2&\theta_{1,6}&\beta_1N^TC&H_2Y_3\\ *&\bar\theta_{2,2}&0&0&0&\beta_2N^TB&\beta_2N^TC&0\\ *&*&\theta_{3,3}&0&-2U&0&0&0\\ *&*&*&\theta_{4,4}&-2U&0&0&0\\ *&*&*&*&\theta_{5,5}&0&K_2Y_2&0\\ *&*&*&*&*&\bar\theta_{6,6}&0&0\\ *&*&*&*&*&*&\bar\theta_{7,7}&0\\ *&*&*&*&*&*&*&\theta_{8,8} \end{bmatrix},$$

    $$\bar{\Theta}^{(2,2)}=\begin{bmatrix} -12R_1&0&0&0&0&0&0&0\\ *&-12R_2&0&0&0&0&0&0\\ *&*&-12U&0&0&0&0&0\\ *&*&*&-12U&0&0&0&0\\ *&*&*&*&\bar\theta_{13,13}&0&0&\theta_{13,16}\\ *&*&*&*&*&\theta_{14,14}&-X_2&0\\ *&*&*&*&*&*&\theta_{15,15}&0\\ *&*&*&*&*&*&*&\theta_{16,16} \end{bmatrix},$$

    in which:

    $$\begin{aligned} \bar\theta_{1,1}={}&Q_1+Q_2-4R_1-4R_2+\upsilon C_1^TC_1-F_1Y_1-H_1Y_3+2\beta_1Z-2\beta_1N^TA+\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_1-\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_2+\alpha_1\beta_1^2\Sigma_1^T\Sigma_1,\\ \bar\theta_{2,2}={}&\sigma_1^2R_1+\sigma_2^2R_2+(\sigma_2-\sigma_1)^2U-2\beta_2N^T+\frac{(\sigma_2^3-\sigma_1^3)^2}{36}X_2+\alpha_2\beta_2^2\Sigma_1^T\Sigma_1,\\ \bar\theta_{6,6}={}&-Y_1+\alpha_3\beta_1^2\Sigma_2^T\Sigma_2+\alpha_4\beta_2^2\Sigma_2^T\Sigma_2,\qquad \bar\theta_{7,7}=-Y_2+\alpha_5\beta_1^2\Sigma_3^T\Sigma_3+\alpha_6\beta_2^2\Sigma_3^T\Sigma_3,\\ \bar\theta_{13,13}={}&-L+\upsilon C_3^TC_3+\alpha_7\beta_1^2\Sigma_4^T\Sigma_4+\alpha_8\beta_2^2\Sigma_4^T\Sigma_4, \end{aligned}$$

    then the uncertain NNs (2.2) is exponentially stable with a mixed passive and $H_\infty$ performance index $\gamma$.

    Proof. We use the same Lyapunov-Krasovskii functional as in Theorem 3.1, with the matrices $A,B,C,D$ replaced by $A+J_1S_1(t)\Sigma_1$, $B+J_2S_2(t)\Sigma_2$, $C+J_3S_3(t)\Sigma_3$, $D+J_4S_4(t)\Sigma_4$, respectively. Then, applying Lemma 2.5, we get

    $$\begin{aligned} -x^T(t)\big(2\beta_1N^T\Delta A(t)\big)x(t)\le{}&\alpha_1\beta_1^2x^T(t)\Sigma_1^T\Sigma_1x(t)+\alpha_1^{-1}x^T(t)N^TJ_1J_1^TNx(t),\\ -x^T(t)\beta_2\Delta A^T(t)N\dot{x}(t)-\dot{x}^T(t)\beta_2N^T\Delta A(t)x(t)\le{}&\alpha_2\beta_2^2\dot{x}^T(t)\Sigma_1^T\Sigma_1\dot{x}(t)+\alpha_2^{-1}x^T(t)N^TJ_1J_1^TNx(t),\\ x^T(t)\beta_1N^T\Delta B(t)f(x(t))+f^T(x(t))\beta_1\Delta B^T(t)Nx(t)\le{}&\alpha_3\beta_1^2f^T(x(t))\Sigma_2^T\Sigma_2f(x(t))+\alpha_3^{-1}x^T(t)N^TJ_2J_2^TNx(t),\\ \dot{x}^T(t)\beta_2N^T\Delta B(t)f(x(t))+f^T(x(t))\beta_2\Delta B^T(t)N\dot{x}(t)\le{}&\alpha_4\beta_2^2f^T(x(t))\Sigma_2^T\Sigma_2f(x(t))+\alpha_4^{-1}\dot{x}^T(t)N^TJ_2J_2^TN\dot{x}(t),\\ x^T(t)\beta_1N^T\Delta C(t)k(x(t-\sigma(t)))+k^T(x(t-\sigma(t)))\beta_1\Delta C^T(t)Nx(t)\le{}&\alpha_5\beta_1^2k^T(x(t-\sigma(t)))\Sigma_3^T\Sigma_3k(x(t-\sigma(t)))+\alpha_5^{-1}x^T(t)N^TJ_3J_3^TNx(t),\\ \dot{x}^T(t)\beta_2N^T\Delta C(t)k(x(t-\sigma(t)))+k^T(x(t-\sigma(t)))\beta_2\Delta C^T(t)N\dot{x}(t)\le{}&\alpha_6\beta_2^2k^T(x(t-\sigma(t)))\Sigma_3^T\Sigma_3k(x(t-\sigma(t)))+\alpha_6^{-1}\dot{x}^T(t)N^TJ_3J_3^TN\dot{x}(t),\\ x^T(t)\beta_1N^T\Delta D(t)\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(s))\,ds\,\beta_1\Delta D^T(t)Nx(t)\le{}&\alpha_7\beta_1^2\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(s))\,ds\,\Sigma_4^T\Sigma_4\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+\alpha_7^{-1}x^T(t)N^TJ_4J_4^TNx(t),\\ \dot{x}^T(t)\beta_2N^T\Delta D(t)\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(s))\,ds\,\beta_2\Delta D^T(t)N\dot{x}(t)\le{}&\alpha_8\beta_2^2\int_{t-\delta_2(t)}^{t-\delta_1(t)}h^T(x(s))\,ds\,\Sigma_4^T\Sigma_4\int_{t-\delta_2(t)}^{t-\delta_1(t)}h(x(s))\,ds+\alpha_8^{-1}\dot{x}^T(t)N^TJ_4J_4^TN\dot{x}(t). \end{aligned}$$

    Then, applying arguments similar to the proof of Theorem 3.1 together with Lemma 2.6, we have

    $$\dot{V}(t)+\upsilon z^T(t)z(t)-2(1-\upsilon)\gamma z^T(t)\omega(t)-\gamma^2\omega^T(t)\omega(t)\le\xi^T(t)\big(\varepsilon\Psi^{(1)}+(1-\varepsilon)\Psi^{(2)}\big)\xi(t),$$

    where $\Psi^{(i)}=\Psi+\Theta_i$ $(i=1,2)$ with $\Psi$ and $\Theta_i$ defined in (3.32) and (3.33).

    Since $0\le\varepsilon\le1$, the term $\varepsilon\Psi^{(1)}+(1-\varepsilon)\Psi^{(2)}$ is a convex combination of $\Psi^{(1)}$ and $\Psi^{(2)}$; it is negative definite for every admissible $\varepsilon$ if and only if

    $$\Psi^{(1)}<0, \tag{3.34}$$

    $$\Psi^{(2)}<0. \tag{3.35}$$

    Therefore, (3.34) and (3.35) are equivalent to (3.32) and (3.33), respectively. This completes the proof.

    In the third part, we investigate the stability of a special model of neural networks, in order to compare the maximum allowable delay with existing results.

    Remark 2. We consider the following neural network model as a special case of the system (2.1)

    $$\dot{x}(t)=-Ax(t)+Bf(x(t))+Ck(x(t-\sigma(t))). \tag{3.36}$$

    Corollary 3.3. For given scalars $\sigma_1,\sigma_2,\beta_1$, and $\beta_2$, if there exist nine $n\times n$ matrices $P>0$, $Q_1>0$, $Q_2>0$, $R_1>0$, $R_2>0$, $U>0$, $X_1>0$, $X_2>0$, $N>0$ and two $n\times n$ positive diagonal matrices $Y_1>0$, $Y_2>0$ such that the following LMIs hold:

    $$\Pi+\Pi_1<0, \tag{3.37}$$

    $$\Pi+\Pi_2<0, \tag{3.38}$$

    where

    $$\Pi_1=-e_{13}X_1e_{13}^T,\qquad \Pi_2=-e_{12}X_1e_{12}^T,\qquad \Pi=[\theta^{(i,j)}]_{13\times13},$$

    with $(\theta^{(i,j)})^T=\theta^{(j,i)}$,

    $$\begin{aligned} \theta^{(1,1)}={}&Q_1+Q_2-4R_1-4R_2-F_1Y_1-2\beta_1N^TA+\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_1-\frac{(\sigma_2^2-\sigma_1^2)^2}{4}X_2,\\ \theta^{(1,2)}={}&P-\beta_1N^T-\beta_2A^TN,\qquad \theta^{(1,3)}=-2R_1,\qquad \theta^{(1,4)}=-2R_2,\\ \theta^{(1,6)}={}&F_2Y_1+\beta_1N^TB,\qquad \theta^{(1,7)}=\beta_1N^TC,\qquad \theta^{(1,8)}=6R_1,\qquad \theta^{(1,9)}=6R_2,\\ \theta^{(1,12)}={}&\frac{\sigma_2^2-\sigma_1^2}{2}X_2,\qquad \theta^{(1,13)}=\frac{\sigma_2^2-\sigma_1^2}{2}X_2,\\ \theta^{(2,2)}={}&\sigma_1^2R_1+\sigma_2^2R_2+(\sigma_2-\sigma_1)^2U-2\beta_2N^T+\frac{(\sigma_2^3-\sigma_1^3)^2}{36}X_2,\\ \theta^{(2,6)}={}&\beta_2N^TB,\qquad \theta^{(2,7)}=\beta_2N^TC,\\ \theta^{(3,3)}={}&-Q_1-4R_1-4U,\qquad \theta^{(3,5)}=-2U,\qquad \theta^{(3,8)}=6R_1,\qquad \theta^{(3,10)}=6U,\\ \theta^{(4,4)}={}&-Q_2-4R_2-4U,\qquad \theta^{(4,5)}=-2U,\qquad \theta^{(4,9)}=6R_2,\qquad \theta^{(4,11)}=6U,\\ \theta^{(5,5)}={}&-8U-K_1Y_2,\qquad \theta^{(5,7)}=K_2Y_2,\qquad \theta^{(5,10)}=6U,\qquad \theta^{(5,11)}=6U,\\ \theta^{(6,6)}={}&-Y_1,\qquad \theta^{(7,7)}=-Y_2,\qquad \theta^{(8,8)}=-12R_1,\qquad \theta^{(9,9)}=-12R_2,\\ \theta^{(10,10)}={}&-12U,\qquad \theta^{(11,11)}=-12U,\qquad \theta^{(12,12)}=-X_1-X_2,\qquad \theta^{(12,13)}=-X_2,\qquad \theta^{(13,13)}=-X_1-X_2, \end{aligned}$$

    and the remaining entries are 0,

    then, the NNs (3.36) is exponentially stable.

    Proof. We choose the following Lyapunov–Krasovskii functional candidate for the system (3.36) as

    $$V(x(t),t)=\sum_{i=1}^{8}V_i(x(t),t),$$

    where

    $$\begin{aligned} V_1(x(t),t)={}&x^T(t)Px(t),\qquad V_2(x(t),t)=\int_{t-\sigma_1}^{t}x^T(s)Q_1x(s)\,ds,\qquad V_3(x(t),t)=\int_{t-\sigma_2}^{t}x^T(s)Q_2x(s)\,ds,\\ V_4(x(t),t)={}&\sigma_1\int_{-\sigma_1}^{0}\int_{t+s}^{t}\dot{x}^T(\tau)R_1\dot{x}(\tau)\,d\tau\,ds,\qquad V_5(x(t),t)=\sigma_2\int_{-\sigma_2}^{0}\int_{t+s}^{t}\dot{x}^T(\tau)R_2\dot{x}(\tau)\,d\tau\,ds,\\ V_6(x(t),t)={}&(\sigma_2-\sigma_1)\int_{-\sigma_2}^{-\sigma_1}\int_{t+s}^{t}\dot{x}^T(\tau)U\dot{x}(\tau)\,d\tau\,ds,\\ V_7(x(t),t)={}&\frac{\sigma_2^2-\sigma_1^2}{2}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{t+\lambda}^{t}x^T(s)X_1x(s)\,ds\,d\lambda\,d\beta,\\ V_8(x(t),t)={}&\frac{\sigma_2^3-\sigma_1^3}{6}\int_{-\sigma_2}^{-\sigma_1}\int_{\beta}^{0}\int_{\lambda}^{0}\int_{t+\varphi}^{t}\dot{x}^T(s)X_2\dot{x}(s)\,ds\,d\varphi\,d\lambda\,d\beta. \end{aligned}$$

    By a proof similar to that of Theorem 3.1, the system (3.36) is exponentially stable.

    Remark 3. Recently, the robust passivity problem of uncertain neural networks with interval discrete and distributed time-varying delays has been studied in [14]. Also, the robust reliable $H_\infty$ control problem of uncertain neural networks with mixed time delays has been discussed in [23]. However, the problem of mixed passive and $H_\infty$ performance for uncertain neural networks with interval discrete and distributed time-varying delays had not been investigated before. The results in this paper provide sufficient conditions to assure that the uncertain neural network is exponentially stable with a mixed passive and $H_\infty$ index $\gamma$. The conditions are obtained by constructing a Lyapunov-Krasovskii functional consisting of novel integral terms.

    Remark 4. It is well known that time delay is a normal phenomenon in neural networks, since neural networks consist of a large number of neurons that connect with each other through axons of diverse sizes and lengths. In practice, time delay can occur in an irregular fashion; for instance, the time-varying delays are sometimes not differentiable. Hence, in this work, the interval discrete and distributed time-varying delays are not required to be differentiable functions.

    Remark 5. It is well known that $H_\infty$ theory is very important in control problems. $H_\infty$ approaches are used in control theory to synthesize controllers achieving stabilization with an $H_\infty$ norm bound for disturbance attenuation. Passivity theory is widely used in system synthesis and analysis, as a system with passivity performance can effectively reduce the impact of noise. In fact, a passive system does not produce energy by itself but only uses the system's energy; the main property of passivity is that it keeps the system internally stable. For these reasons, the obtained results address the mixed passivity and $H_\infty$ problem for uncertain neural networks with mixed time-varying delays. Compared with the design of a single $H_\infty$ or passive controller, the control problem under a mixed $H_\infty$/passive performance is more general. For example, a mixed $H_\infty$ and passive performance index has been employed in handling the event-triggered reliable control issue for fuzzy Markov jump systems (FMJSs), where the $H_\infty$ or the passive event-triggered reliable control problem for FMJSs is recovered by tuning some fixed parameters. Hence, this paper is more general and convenient than existing works on the individual passive and $H_\infty$ problems.

    Remark 6. In this work, the Lyapunov-Krasovskii functional consists of single, double, triple, and quadruple integral terms, which make full use of the information on the delays $\sigma_1,\sigma_2,\delta_1,\delta_2$ and the state variable $x(t)$. Furthermore, more information on the activation functions is taken fully into account in the stability and performance analysis; that is, $F_i^-\le\frac{f_i(x_i(t))}{x_i(t)}\le F_i^+$, $K_i^-\le\frac{k_i(x_i(t-\sigma(t)))}{x_i(t-\sigma(t))}\le K_i^+$, and $H_i^-\le\frac{h_i(x_i(t))}{x_i(t)}\le H_i^+$ are addressed in the calculation. Hence, the construction and computation technique of the Lyapunov-Krasovskii functional are the main keys to the improved results of this work. In the proofs of Theorems 3.1 and 3.2 and Corollary 3.3, integral inequalities and the convex combination technique are used to bound the derivative of the Lyapunov-Krasovskii functional; these bounds are tighter than the inequalities in [30,31,32,38]. All of this leads to the improved results of our work, as can be seen from the comparisons with some existing works in the numerical examples. However, the complex computation of the Lyapunov-Krasovskii functional leads to LMIs that contain a lot of information about the system. The approach remains feasible for NNs with a large number of neurons, since the LMIs can be solved by the Matlab LMI toolbox. Hence, for further work, it would be interesting to find a simpler Lyapunov-Krasovskii functional that still achieves better results.

    In this section, we provide four numerical examples that illustrate the effectiveness of the proposed results. Moreover, two of the numerical examples show less conservative results than existing ones.

    Example 4.1. We consider the neural networks (3.36) with matrix parameters in [30]:

    $$A=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad B=\begin{bmatrix}1&0.5\\0.5&1.5\end{bmatrix},\quad C=\begin{bmatrix}2&0.5\\0.5&2\end{bmatrix},\quad F_1=K_1=\begin{bmatrix}0&0\\0&0\end{bmatrix},\quad F_2=K_2=\begin{bmatrix}0.2&0\\0&0.4\end{bmatrix}.$$

    Taking parameters $\beta_1=\beta_2=1$ and solving Example 4.1 using the LMIs in Corollary 3.3, we obtain the maximum allowable values of $\sigma_2$ for different $\sigma_1$ without requiring any upper bound $\mu$ on the derivative of the delay, as shown in Table 1. Table 1 shows that the results derived in this paper are less conservative than those in [30].

    Table 1.  The maximum allowable values of σ2 for different values of σ1 and μ.

        Methods          σ1         μ=0.8     μ=0.9     unknown μ
        [30]             σ1=0.5     0.8262    0.8215    -
        Corollary 3.3    σ1=0.5     -         -         0.9976
        [30]             σ1=0.75    0.9669    0.9625    -
        Corollary 3.3    σ1=0.75    -         -         1.1233
        [30]             σ1=1       1.1152    1.1108    -
        Corollary 3.3    σ1=1       -         -         1.2710

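    The feasibility tests in this paper were carried out with the Matlab LMI toolbox. As a rough open-source analogue (our own sketch, not the authors' code, and assuming the cvxpy and numpy packages with the SCS solver are installed), the following Python snippet checks feasibility of a simple delay-free Lyapunov LMI for the matrix $A$ of Example 4.1; it is a stand-in for, not an implementation of, the full conditions (3.37) and (3.38), which are larger block LMIs assembled and solved in the same way.

    ```python
    import numpy as np
    import cvxpy as cp

    # Stand-in LMI feasibility check (illustrative only): find P > 0 with
    # (-A)^T P + P (-A) < 0, i.e., a Lyapunov certificate for dx/dt = -A x,
    # using the matrix A of Example 4.1.
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6  # margin that turns strict inequalities into solvable ones

    constraints = [
        P >> eps * np.eye(n),                        # P > 0
        (-A).T @ P + P @ (-A) << -eps * np.eye(n),   # Lyapunov LMI
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    print("status:", prob.status)   # 'optimal' means the LMI is feasible
    print("P =\n", P.value)
    ```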

    Example 4.2. We consider the neural networks (3.36) with matrix parameters in [31,32,38]:

    $$A=\begin{bmatrix}1.5&0\\0&0.7\end{bmatrix},\quad B=\begin{bmatrix}0.0503&0.0454\\0.0987&0.2075\end{bmatrix},\quad C=\begin{bmatrix}0.2381&0.9320\\0.0388&0.5062\end{bmatrix},\quad F_1=K_1=\begin{bmatrix}0&0\\0&0\end{bmatrix},\quad F_2=K_2=\begin{bmatrix}0.15&0\\0&0.4\end{bmatrix}.$$

    Taking parameters $\beta_1=\beta_2=1$ and solving Example 4.2 using the LMIs in Corollary 3.3, we get the maximum allowable values of $\sigma_2$ for $\sigma_1=0$ without requiring any upper bound $\mu$ on the derivative of the delay, as shown in Table 2. Table 2 illustrates that the results obtained in this paper are less conservative than those in [31,32,38].

    Table 2.  The maximum allowable values of σ2 for σ1=0 and different values of μ.

        Methods          μ=0.5     μ=0.55    unknown μ
        [38]             3.0594    2.9814    -
        [31]             3.3377    3.2350    -
        [32]             3.4600    3.4100    -
        Corollary 3.3    -         -         3.5814


    Example 4.3. We consider the neural networks (3.1) with $\sigma_1=0.5$, $\sigma_2=1.75$, $\delta_1=0.2$, $\delta_2=1.0$, $\upsilon=0.1$, $\beta_1=0.9$, $\beta_2=0.2$,

    $$A=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad B=\begin{bmatrix}0.2&0.1\\0.5&0.1\end{bmatrix},\quad C=\begin{bmatrix}0.5&0\\0.3&0.2\end{bmatrix},\quad D=\begin{bmatrix}0.15&0.1\\0&0.3\end{bmatrix},\quad C_1=\begin{bmatrix}0.5&0\\0&0.3\end{bmatrix},\quad I=\begin{bmatrix}1&0\\0&1\end{bmatrix},$$

    $$C_2=C_3=C_4=0.1I,\quad F_1=K_1=H_1=-0.4I,\quad F_2=K_2=H_2=0.4I,\quad h_i(x_i)=\tanh(x_i),\quad f_i(x_i)=k_i(x_i)=0.2(|x_i+1|-|x_i-1|).$$

    Solving the LMIs (3.2) and (3.3) of Theorem 3.1, we obtain

    $$P=\begin{bmatrix}3.6577&0.2200\\0.2200&3.7479\end{bmatrix},\quad Q_1=\begin{bmatrix}4.0338&0.0460\\0.0460&3.9284\end{bmatrix},\quad Q_2=\begin{bmatrix}4.1684&0.0580\\0.0580&4.0614\end{bmatrix},$$

    $$R_1=\begin{bmatrix}0.1741&0.0295\\0.0295&0.1987\end{bmatrix},\quad R_2=\begin{bmatrix}0.0292&0.0121\\0.0121&0.0394\end{bmatrix},\quad U=\begin{bmatrix}0.2121&0.0178\\0.0178&0.2033\end{bmatrix},$$

    $$L=\begin{bmatrix}3.6430&0.0277\\0.0277&3.7766\end{bmatrix},\quad X_1=\begin{bmatrix}2.8416&0.0197\\0.0197&2.8239\end{bmatrix},\quad X_2=\begin{bmatrix}0.0479&0.0193\\0.0193&0.0642\end{bmatrix},$$

    $$N=\begin{bmatrix}2.3363&0.2470\\0.2432&2.3840\end{bmatrix},\quad Z=\begin{bmatrix}-13.6905&0.9328\\0.5358&-12.9525\end{bmatrix},$$

    $$Y_1=\begin{bmatrix}3.9748&0\\0&3.9748\end{bmatrix},\quad Y_2=\begin{bmatrix}0.4324&0\\0&0.4324\end{bmatrix},\quad Y_3=\begin{bmatrix}4.6765&0\\0&4.6765\end{bmatrix}.$$

    The state feedback control gain is obtained as

    $$U(t)=N^{-1}Zx(t)=\begin{bmatrix}-5.9479&0.9843\\0.8316&-5.5335\end{bmatrix}x(t),\quad t\ge0.$$
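    As a quick numerical cross-check (ours, using only the matrices reported above), the gain can be reproduced from $N$ and $Z$:

    ```python
    import numpy as np

    # N and Z as reported for Example 4.3; K = N^{-1} Z per Theorem 3.1.
    N = np.array([[2.3363, 0.2470],
                  [0.2432, 2.3840]])
    Z = np.array([[-13.6905, 0.9328],
                  [0.5358, -12.9525]])

    K = np.linalg.solve(N, Z)  # solves N K = Z without forming N^{-1} explicitly
    print(np.round(K, 4))      # close to [[-5.9479, 0.9843], [0.8316, -5.5335]],
                               # up to the rounding of the reported N and Z
    ```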

    The maximum allowable values of $\sigma_2$ for different values of $\sigma_1$ are shown in Table 3. Furthermore, we want to find the relation among the scalars $\sigma_2$, $\upsilon$, and $\gamma$. We set three different values of $\upsilon$, namely $\upsilon=0$, $\upsilon=0.5$, and $\upsilon=1$, which means that the passivity case, the mixed passivity and $H_\infty$ case, and the $H_\infty$ case are studied, respectively. Moreover, we choose values of $\sigma_2$ from $\sigma_2=0.5$ to $\sigma_2=2$, with the other parameters fixed at $\sigma_1=0.2$, $\delta_1=0.2$, $\delta_2=0.8$, $\beta_1=0.9$, $\beta_2=0.2$. By applying Theorem 3.1 and the Matlab LMI toolbox to solve the LMIs (3.2) and (3.3), we obtain the relation among the parameters $\sigma_2$, $\upsilon$, and $\gamma$, which is presented in Table 4. Figure 1 shows the response solution $x(t)$ in Example 4.3 with $\omega(t)=0$ and the initial condition $\phi(t)=[0.1\ \ -0.1]^T$. Figure 2 shows the response solution $x(t)$ in Example 4.3 where $\omega(t)$ is Gaussian noise with mean 0 and variance 1 and the initial condition $\phi(t)=[0.1\ \ -0.1]^T$.

    Table 3.  The maximum allowable values of σ2 for different values of σ1 in Example 4.3.

        Method         σ1=0      σ1=0.5    σ1=1      σ1=2      σ1=3
        Theorem 3.1    2.1176    2.3865    2.5354    3.3564    4.1253
    Table 4.  The minimum allowable values of γ for the mixed passive and $H_\infty$ analysis with different values of σ2 and υ in Example 4.3.

        γmin     σ2=0.5    σ2=1      σ2=1.5    σ2=2
        υ=0      0.5672    0.6835    0.8135    0.9465
        υ=0.5    0.7752    0.9683    1.1035    1.2156
        υ=1      1.2331    1.4452    1.6862    1.7965
    Figure 1.  The trajectories of x1(t) and x2(t) with ω(t)=0 in Example 4.3.
    Figure 2.  The trajectories of x1(t) and x2(t) with Gaussian noise in Example 4.3.

    The numerical simulations are carried out using an explicit Runge-Kutta-like method (dde45), with extrapolation and interpolation by third-order splines.
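    For readers who prefer open-source tools, a crude fixed-step analogue is sketched below (our own sketch, not the authors' dde45 setup). It integrates the controlled system of Example 4.3 with the delay held constant at $\sigma(t)\equiv\sigma_2$, with $\omega(t)=0$, and with the distributed-delay term omitted for brevity, so it only approximates the simulated model; the matrices and the constant initial history $\phi(t)=[0.1,\,-0.1]^T$ are taken from the example as printed, and $K$ is the computed feedback gain.

    ```python
    import numpy as np

    # Crude fixed-step Euler simulation of the delayed system of Example 4.3
    # (discrete delay only; the distributed-delay term is omitted for brevity).
    A = np.array([[1.0, 0.0], [0.0, 1.0]])
    B = np.array([[0.2, 0.1], [0.5, 0.1]])
    C = np.array([[0.5, 0.0], [0.3, 0.2]])
    K = np.array([[-5.9479, 0.9843], [0.8316, -5.5335]])

    f = lambda x: 0.2 * (np.abs(x + 1) - np.abs(x - 1))  # f_i = k_i in Example 4.3
    sigma, dt, T = 1.75, 1e-3, 5.0   # delay held at sigma_2 = 1.75 (constant here)

    d = int(sigma / dt)              # delay expressed in time steps
    steps = int(T / dt)
    x = np.zeros((steps + d + 1, 2))
    x[: d + 1] = np.array([0.1, -0.1])   # constant initial history phi(t)

    for i in range(d, d + steps):
        xdot = (K - A) @ x[i] + B @ f(x[i]) + C @ f(x[i - d])  # k = f here
        x[i + 1] = x[i] + dt * xdot

    print("final state:", x[-1])  # should decay toward the origin
    ```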

    Example 4.4. We consider the uncertain neural networks (2.2) with $\sigma_1=0.7$, $\sigma_2=1.5$, $\delta_1=0.2$, $\delta_2=1$, $\upsilon=0.1$, $\beta_1=0.9$, $\beta_2=0.2$,

    $$A=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad B=\begin{bmatrix}0.2&0.1\\0.5&0.1\end{bmatrix},\quad C=\begin{bmatrix}0.5&0\\0.3&0.2\end{bmatrix},\quad D=\begin{bmatrix}0.15&0.1\\0&0.3\end{bmatrix},\quad C_1=\begin{bmatrix}0.5&0\\0&0.3\end{bmatrix},\quad I=\begin{bmatrix}1&0\\0&1\end{bmatrix},$$

    $$C_2=C_3=C_4=0.1I,\quad F_1=K_1=H_1=-0.4I,\quad F_2=K_2=H_2=0.4I,\quad J_1=J_2=J_3=J_4=0.2I,\quad \Sigma_1=\Sigma_2=\Sigma_3=\Sigma_4=I,$$

    $$h_i(x_i)=\tanh(x_i),\quad f_i(x_i)=k_i(x_i)=0.2(|x_i+1|-|x_i-1|).$$

    Solving the LMIs (3.32) and (3.33) of Theorem 3.2, we obtain

    $$P=\begin{bmatrix}2.8510&0.1478\\0.1478&2.7756\end{bmatrix},\quad Q_1=\begin{bmatrix}2.3154&0.1464\\0.1464&2.0641\end{bmatrix},\quad Q_2=\begin{bmatrix}2.3534&0.1402\\0.1402&2.1063\end{bmatrix},$$

    $$R_1=\begin{bmatrix}0.0523&0.0200\\0.0200&0.0680\end{bmatrix},\quad R_2=\begin{bmatrix}0.0131&0.0077\\0.0077&0.0192\end{bmatrix},\quad U=\begin{bmatrix}0.2104&0.0062\\0.0062&0.1748\end{bmatrix},$$

    $$L=\begin{bmatrix}2.2685&0.0108\\0.0108&2.3401\end{bmatrix},\quad X_1=\begin{bmatrix}1.3348&0.0125\\0.0125&1.3274\end{bmatrix},\quad X_2=\begin{bmatrix}0.0224&0.0103\\0.0103&0.0305\end{bmatrix},$$

    $$N=\begin{bmatrix}1.6053&0.0435\\0.0978&1.4801\end{bmatrix},\quad Z=\begin{bmatrix}-10.2677&0.5664\\0.5137&-9.2901\end{bmatrix},$$

    $$Y_1=\begin{bmatrix}3.0132&0\\0&3.0132\end{bmatrix},\quad Y_2=\begin{bmatrix}0.4054&0\\0&0.4054\end{bmatrix},\quad Y_3=\begin{bmatrix}3.4945&0\\0&3.4945\end{bmatrix},$$

    $$\alpha_1=1.9270,\ \alpha_2=0.6465,\ \alpha_3=1.2926,\ \alpha_4=1.9011,\ \alpha_5=0.0754,\ \alpha_6=0.6677,\ \alpha_7=1.2734,\ \alpha_8=1.8989.$$

    The state feedback control gain is obtained as

    $$U(t)=N^{-1}Zx(t)=\begin{bmatrix}-6.4170&0.5239\\0.7712&-6.3115\end{bmatrix}x(t),\quad t\ge0.$$

    The maximum allowable values of $\sigma_2$ for different values of $\sigma_1$ are shown in Table 5. Furthermore, we want to find the relation among the scalars $\sigma_2$, $\upsilon$, and $\gamma$. We set three different values of $\upsilon$, namely $\upsilon=0$, $\upsilon=0.5$, and $\upsilon=1$, which means that the passivity case, the mixed passivity and $H_\infty$ case, and the $H_\infty$ case are considered, respectively. Moreover, we choose values of $\sigma_2$ from $\sigma_2=0.5$ to $\sigma_2=2$, with the other parameters fixed at $\sigma_1=0.2$, $\delta_1=0.2$, $\delta_2=0.8$, $\beta_1=0.9$, $\beta_2=0.2$. By applying Theorem 3.2 and the Matlab LMI toolbox to solve the LMIs (3.32) and (3.33), we obtain the relation among the parameters $\sigma_2$, $\upsilon$, and $\gamma$, which is presented in Table 6. Figure 3 shows the response solution $x(t)$ in Example 4.4 with $\omega(t)=0$ and the initial condition $\phi(t)=[0.1\ \ -0.1]^T$. Figure 4 shows the response solution $x(t)$ in Example 4.4 where $\omega(t)$ is Gaussian noise with mean 0 and variance 1 and the initial condition $\phi(t)=[0.1\ \ -0.1]^T$.

    Table 5.  The maximum allowable values of σ2 for different values of σ1 in Example 4.4.

        Method         σ1=0      σ1=0.5    σ1=1      σ1=2      σ1=3
        Theorem 3.2    1.8308    2.2056    2.4233    3.1232    3.8142
    Table 6.  The minimum allowable values of γ for the mixed passive and $H_\infty$ analysis with different values of σ2 and υ in Example 4.4.

        γmin     σ2=0.5    σ2=1      σ2=1.5    σ2=2
        υ=0      0.6354    0.7534    0.8756    0.9869
        υ=0.5    0.8965    1.0231    1.2231    1.4365
        υ=1      1.6352    1.7563    1.8641    1.9634
    Figure 3.  The trajectories of x1(t) and x2(t) with ω(t)=0 in Example 4.4.
    Figure 4.  The trajectories of x1(t) and x2(t) with Gaussian noise in Example 4.4.

    Remark 7. In this work, $\sigma_1,\sigma_2,\delta_1,\delta_2,\beta_1,\beta_2,\gamma$ are chosen as real numbers satisfying $0\le\sigma_1\le\sigma(t)\le\sigma_2$, $0\le\delta_1\le\delta_1(t)\le\delta_2(t)\le\delta_2$, and $\gamma>0$. In practice, these parameters can be designed within an appropriate range; furthermore, suitable values of $\sigma_1,\sigma_2,\delta_1,\delta_2,\beta_1,\beta_2$ lead to the smallest $\gamma$ for the mixed passive and $H_\infty$ analysis.

    Remark 8. The stability criteria of Theorem 3.1 in the form of the LMIs (3.2) and (3.3) can easily be checked using the LMI toolbox in MATLAB [39]. The improved stability criteria based on the Lyapunov-Krasovskii functional are expressed as LMIs whose dimension depends on the number of neurons in the neural networks; thus, the computational burden grows with the network size. This issue motivates ongoing work on LMI optimization in applied mathematics and optimization research. Hence, in the future, new techniques should be considered to reduce the conservativeness caused by the time delays, such as the delay-fractioning approach.

    Remark 9. In future work, it would be very challenging to apply some of the lemmas or the Lyapunov-Krasovskii functional used in this paper to the quaternion-valued case to obtain improved stability conditions.

    The problem of mixed passive and $H_\infty$ analysis for uncertain neural networks with state feedback control is investigated in this paper. We obtain new sufficient conditions that guarantee exponential stability with a mixed passive and $H_\infty$ performance for the uncertain neural networks by using a Lyapunov-Krasovskii functional consisting of single, double, triple, and quadruple integral terms together with a feedback controller. Furthermore, integral inequalities and a convex combination technique are applied to achieve less conservative results for a special case of neural networks with interval discrete time-varying delays. The new criteria are expressed in terms of linear matrix inequalities (LMIs) that cover both $H_\infty$ and passive performance by setting parameters in the general performance index. Finally, numerical examples have been given to show the effectiveness of the proposed results and the improvement over some existing results in the literature. In future work, the derived results and methods in this paper are expected to be applied to other systems such as fuzzy control systems, complex dynamical networks, quaternion-valued neural networks, and so on [16,40,41].

    The first author was supported by the Science Achievement Scholarship of Thailand (SAST). The second author was financially supported by Khon Kaen University. The third and the fourth authors were supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number : B05F630095).

    The authors declare no conflict of interest.



    [1] L. O. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst., 35 (1988), 1273–1290. doi: 10.1109/31.7601
    [2] M. A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern., 13 (1983), 815–826.
    [3] S. Haykin, Neural Networks, New Jersey: Englewood Cliffs, 1994.
    [4] H. Wang, Y. Yu, G. Wen, Stability analysis of fractional-order Hopfield neural networks with time delays, Neural Networks, 55 (2014), 98–109. doi: 10.1016/j.neunet.2014.03.012
    [5] H. Zhang, Z. Wang, New delay-dependent criterion for the stability of recurrent neural networks with time-varying delay, Sci. China Series F, 52 (2009), 942–948.
    [6] L. Wang, X. Zou, Harmless delays in Cohen-Grossberg neural networks, Phys. D, 170 (2002), 162–173. doi: 10.1016/S0167-2789(02)00544-4
    [7] A. Farnam, R. M. Esfanjani, A. Ahmadi, Delay-dependent criterion for exponential stability analysis of neural networks with time-varying delays, IFAC-PapersOnLine, 49 (2016), 130–135.
    [8] H. B. Zeng, Y. He, P. Shi, M. Wu, S. P. Xiao, Dissipativity analysis of neural networks with time-varying delays, Neurocomputing, 168 (2015), 741–746. doi: 10.1016/j.neucom.2015.05.050
    [9] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Networks, 19 (2006), 667–675. doi: 10.1016/j.neunet.2005.03.015
    [10] H. P. Kriegel, M. Pfeifle, Density-based clustering of uncertain data, In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2005,672–677.
    [11] G. Cormode, A. McGregor, Approximation algorithms for clustering uncertain data, In: Proceedings of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, 2008,191–199.
    [12] C. C. Aggarwal, P. Yu, A framework for clustering uncertain data streams, In: Proceedings-International Conference on Data Engineering, 2008,150–159.
    [13] K. Subramanian, P. Muthukumar, S. Lakshmanan, Robust stabilization of uncertain neural networks with additive time-varying delays, IFAC-PapersOnLine, 49 (2016), 154–159.
    [14] H. B. Zeng, J. H. Park, H. Shen, Robust passivity analysis of neural networks with discrete and distributed delays, Neurocomputing, 149 (2015), 1092–1097. doi: 10.1016/j.neucom.2014.07.024
    [15] L. Wu, W. X. Zheng, Passivity-based sliding mode control of uncertain singular time-delay systems, Automatica, 45 (2009), 2120–2127. doi: 10.1016/j.automatica.2009.05.014
    [16] G. Calcev, R. Gorez, M. D. Neyer, Passivity approach to fuzzy control systems, Automatica, 34 (1998), 339–344. doi: 10.1016/S0005-1098(97)00202-1
    [17] H. Gao, T. Chen, T. Chai, Passivity and passification for networked control systems, SIAM J. Control Optim., 46 (2007), 1299–1322. doi: 10.1137/060655110
    [18] L. Xie, M. Fu, H. Li, Passivity analysis and passification for uncertain signal processing systems, IEEE Trans. Signal Process., 46 (1998), 2394–2403. doi: 10.1109/78.709527
    [19] H. Li, J. Lam, K. C. Cheung, Passivity criteria for continuous-time neural networks with mixed time-varying delays, Appl. Math. Comput., 218 (2012), 11062–11074.
    [20] M. V. Thuan, H. Trinh, L. V. Hien, New inequality-based approach to passivity analysis of neural networks with interval time-varying delay, Neurocomputing, 194 (2016), 301–307. doi: 10.1016/j.neucom.2016.02.051
    [21] S. Xu, W. X. Zheng, Y. Zou, Passivity analysis of neural networks with time-varying delays, IEEE T. Circuits II, 56 (2009), 325–329.
    [22] N. Yotha, T. Botmart, K. Mukdasai, W. Weera, Improved delay-dependent approach to passivity analysis for uncertain neural networks with discrete interval and distributed time-varying delays, Vietnam J. Math., 45 (2017), 721–736. doi: 10.1007/s10013-017-0243-1
    [23] Y. Du, X. Liu, S. Zhong, Robust reliable $H_\infty$ control for neural networks with mixed time delays, Chaos Soliton. Fract., 91 (2016), 1–8. doi: 10.1016/j.chaos.2016.04.009
    [24] M. Syed Ali, R. Saravanakumar, S. Arik, Novel $H_\infty$ state estimation of static neural networks with interval time-varying delays via augmented Lyapunov-Krasovskii functional, Neurocomputing, 171 (2016), 949–954. doi: 10.1016/j.neucom.2015.07.038
    [25] K. Mathiyalagan, J. H. Park, R. Sakthivel, S. M. Anthoni, Robust mixed $H_\infty$ and passive filtering for networked Markov jump systems with impulses, Signal Process., 101 (2014), 162–173. doi: 10.1016/j.sigpro.2014.02.007
    [26] M. Meisami-Azad, J. Mohammadpour, K. M. Grigoriadis, Dissipative analysis and control of state-space symmetric systems, Automatica, 45 (2009), 1574–1579. doi: 10.1016/j.automatica.2009.02.015
    [27] L. Su, H. Shen, Mixed $H_\infty$/passive synchronization for complex dynamical networks with sampled-data control, Appl. Math. Comput., 259 (2015), 931–942.
    [28] J. Wang, L. Su, H. Shen, Z. G. Wu, J. H. Park, Mixed $H_\infty$/passive sampled-data synchronization control of complex dynamical networks with distributed coupling delay, J. Franklin Inst., 354 (2017), 1302–1320. doi: 10.1016/j.jfranklin.2016.11.035
    [29] R. Sakthivel, R. Anbuvithya, K. Mathiyalagan, P. Prakash, Combined $H_\infty$ and passivity state estimation of memristive neural networks with random gain fluctuations, Neurocomputing, 168 (2015), 1111–1120. doi: 10.1016/j.neucom.2015.05.012
    [30] J. Qiu, H. Yang, J. Zhang, Z. Gao, New robust stability criteria for uncertain neural networks with interval time-varying delays, Chaos Soliton. Fract., 39 (2009), 579–585. doi: 10.1016/j.chaos.2007.01.087
    [31] J. Sun, G. P. Liu, J. Chen, D. Rees, Improved stability criteria for neural networks with time-varying delay, Phys. Lett. A, 373 (2009), 342–348. doi: 10.1016/j.physleta.2008.11.048
    [32] J. Tian, X. Xie, New asymptotic stability criteria for neural networks with time-varying delay, Phys. Lett. A, 374 (2010), 938–943. doi: 10.1016/j.physleta.2009.12.020
    [33] A. Farnam, R. M. Esfanjani, Improved linear matrix inequality approach to stability analysis of linear systems with interval time-varying delays, J. Comput. Appl. Math., 294 (2016), 49–56. doi: 10.1016/j.cam.2015.07.031
    [34] J. An, Z. Li, X. Wang, A novel approach to delay-fractional dependent stability criterion for linear systems with interval delay, ISA Trans., 53 (2014), 210–219. doi: 10.1016/j.isatra.2013.11.020
    [35] A. Seuret, F. Gouaisbaut, Jensen's and Wirtinger's inequalities for time-delay systems, In: Proceedings of the 11th IFAC Workshop on Time-Delay Systems, 2013,343–348.
    [36] Z. Wang, Y. Liu, K. Fraser, X. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A, 354 (2006), 288–297. doi: 10.1016/j.physleta.2006.01.061
    [37] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear matrix inequalities in system and control theory, Philadelphia: SIAM, 1994.
    [38] Y. He, G. P. Liu, D. Rees, M. Wu, Stability analysis for neural networks with time-varying interval delay, IEEE Trans. Neural Netw., 18 (2007), 1850–1854. doi: 10.1109/TNN.2007.903147
    [39] G. Balas, R. Chiang, A. Packard, M. Safonov, Robust Control Toolbox user's guide, Natick: The MathWorks, 2010.
    [40] P. Niamsup, T. Botmart, W. Weera, Modified function projective synchronization of complex dynamical networks with mixed time-varying and asymmetric coupling delays via new hybrid pinning adaptive control, Adv. Differ. Equ., 2017 (2017), 1–31. doi: 10.1186/s13662-016-1057-2
    [41] H. Shu, Q. Song, Y. Liu, Z. Zhao, F. E. Alsaadi, Global μ-stability of quaternion-valued neural networks with non-differentiable time-varying delays, Neurocomputing, 247 (2017), 202–212. doi: 10.1016/j.neucom.2017.03.052
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)