(Displaced table fragment: results of Theorem 2 for h1 = 4-20.)

| h1 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 12 | 15 | 20 |
| Theorem 2 | 65 | 64 | 64 | 63 | 62 | 62 | 60 | 58 | 56 | 54 |
The summation inequality is essential in deriving delay-dependent criteria for discrete-time systems with time-varying delays. This paper uses a newly constructed summation inequality to investigate the robust stability analysis problem for discrete-time neural networks that incorporate interval time-varying leakage, discrete, and distributed delays. A novelty of this study is to develop a new inequality, less conservative than the well-known Jensen inequality, and to apply it in the context of discrete-time delay systems. Stability and passivity criteria are then obtained in terms of linear matrix inequalities (LMIs) using Lyapunov-Krasovskii stability theory, a coefficient matrix decomposition technique, the utilization of a zero equation, a mixed model transformation, and the reciprocally convex combination. With the assistance of the LMI Control toolbox in Matlab, numerical examples are provided to demonstrate the validity and efficiency of the theoretical findings of this research.
Citation: Jenjira Thipcha, Presarin Tangsiridamrong, Thongchai Botmart, Boonyachat Meesuptong, M. Syed Ali, Pantiwa Srisilp, Kanit Mukdasai. Robust stability and passivity analysis for discrete-time neural networks with mixed time-varying delays via a new summation inequality[J]. AIMS Mathematics, 2023, 8(2): 4973-5006. doi: 10.3934/math.2023249
The vast majority of systems in nature, including the biological nervous system, are dynamic: external circumstances endow them with internal memory and cause them to behave in specific ways, and this activity evolves over time. Time delays, both constant and time-varying, commonly arise in dynamic systems such as chemical process control systems, manufacturing systems, cooling systems, hydraulic systems, irrigation channels, metallurgical processes, robotics, and neural networks [1,6,8,10,11,13,14,19,22,23,24,30,31,32,35,36,39,45].
A general class of discrete-time systems can be described by a nonlinear ordinary difference equation in discrete-time state-space form: x(k+1)=f(x(k),u(k)), y(k)=h(x(k),u(k)), where x(k)∈Rn is the internal state vector, u(k)∈Rm is the control input, and y(k)∈Rp is the system output. These equations may be obtained directly from analysis of the dynamical system or process under study, or derived from discretized (sampled) continuous-time dynamics of the nonlinear system under investigation.
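As a concrete illustration of this state-space form, the following minimal Python sketch iterates x(k+1)=f(x(k),u(k)) with y(k)=h(x(k),u(k)). The particular dynamics (a stable linear part with a tanh saturation) are hypothetical, chosen only to make the recursion runnable, and are not from the paper.

```python
import numpy as np

# Hypothetical dynamics: a stable linear part with a tanh saturation.
# f and h are placeholders chosen only to make the recursion runnable.
def f(x, u):
    A = np.array([[0.5, 0.1],
                  [0.0, 0.4]])
    B = np.array([[1.0],
                  [0.5]])
    return A @ np.tanh(x) + B @ u

def h(x, u):
    C = np.array([[1.0, 0.0]])
    return C @ x                      # output depends only on the state here

def simulate(x0, inputs):
    """Iterate x(k+1) = f(x(k), u(k)), y(k) = h(x(k), u(k))."""
    x = np.asarray(x0, dtype=float)
    outputs = []
    for u in inputs:
        outputs.append(h(x, u).item())
        x = f(x, u)
    return x, outputs

x_final, outputs = simulate([1.0, -1.0], [np.array([0.0])] * 50)
print(round(np.linalg.norm(x_final), 8))  # ~0: the state contracts toward the origin
```

With zero input, this particular example contracts to the origin because the linear part is Schur stable and tanh is nonexpansive.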
The stability analysis of discrete-time systems with time-varying delays has emerged as a popular subject in control theory during the last few years [7,12,15,17,21,25,26,27,29,33,37,38,42,44,45]. The stability of neural networks is a precondition for solving many engineering problems; it has garnered considerable attention in recent years, and many elegant results have been published [11,19,22,23,34,35,39,45]. When implementing continuous-time neural networks in computer simulation, for computational or experimental reasons, it is necessary to construct a discrete-time system that is analogous to the continuous-time network. As [24] points out, discretization cannot consistently preserve the dynamics of the continuous-time counterpart, even for a short sampling interval. As a result, it is important to understand the dynamics of discrete-time neural networks in their own right.
Human brain activity, particularly that of neural networks, may be viewed as a very sophisticated parallel computer, potentially more efficient than any presently existing computer when neural networks are implemented directly in hardware. One of the most crucial characteristics of neural networks is the time delay in the leakage term, which has a significant impact on their dynamics: a delayed reaction in this negative feedback term can destabilize the system [5,6,15,18,20]. The authors of [6] investigated neural networks with a time delay in the leakage term and established the existence and uniqueness of the equilibrium point, independent of the time delay and initial conditions; that is, the delay in the leakage term does not affect the existence and uniqueness of the equilibrium. Meanwhile, passivity theory, initially proposed in circuit analysis, has drawn a lot of attention and has been extensively investigated as a helpful tool for the stability analysis of both linear and nonlinear systems, especially high-order systems. Systems with passive qualities maintain their internal stability. Passivity theory has been widely used in a variety of fields, including signal processing [43], fuzzy control [16], sliding mode control [41], and networked control [4].
Motivated by the above discussion, this paper introduces novel delay-range-dependent robust asymptotic stability and passivity criteria for uncertain discrete-time neural networks with interval discrete and distributed time-varying delays. A delay-range-dependent stability and passivity analysis is also carried out for uncertain discrete-time neural networks with interval discrete, distributed, and leakage time-varying delays. New delay-range-dependent robust asymptotic stability and passivity criteria in terms of linear matrix inequalities (LMIs) are obtained for the considered systems using a class of novel augmented Lyapunov-Krasovskii functionals (LKFs), a model transformation, a coefficient matrix decomposition technique, the reciprocally convex combination, the Leibniz-Newton formula, and the use of a zero equation. An improved delay-range-dependent stability and passivity criterion for discrete-time neural networks with interval time-varying delay is also presented. Numerical examples show that the proposed results are more effective and less conservative than existing ones. The main contributions and highlights of this paper are summarized in the following key points.
(1) A newly constructed summation inequality is used to address the robust stability analysis problem for discrete-time neural networks incorporating interval time-varying leakage, discrete, and distributed delays, and to develop the corresponding delay-dependent criteria.
(2) We apply new inequalities and techniques to improve the stability criteria, including the Jensen inequality, a coefficient matrix decomposition technique, the utilization of a zero equation, a mixed model transformation, and the reciprocally convex combination. Using the new LKFs together with these lemmas leads to less conservative results than those in the published literature, as illustrated via numerical examples.
(3) We present numerical examples to demonstrate the feasibility and effectiveness of the theorem.
Notations: Throughout the paper, Rn denotes the n-dimensional Euclidean space; Z+={0,1,2,3,...}; N = {1, 2, 3, ...}; Rn×m denotes the set of n×m real matrices; AT denotes the transpose of the matrix A; A is symmetric if A=AT; In is the n×n identity matrix; a matrix A is called semi-positive definite (A≥0) if xTAx≥0 for all x∈Rn; A is positive definite (A>0) if xTAx>0 for all x≠0; A>B means A−B>0; A≥B means A−B≥0; ρ=max{τ2,h2,M}; ∗ denotes the symmetric terms in a symmetric matrix; [⋆] denotes the repetition of the left-hand vector in a symmetric quadratic form.
Consider the following uncertain discrete-time neural network with interval time-varying leakage, discrete and distributed delays, as shown in the following system:
{x(k+1)=(A+ΔA(k))x(k−τ(k))+(B+ΔB(k))f(x(k))+(C+ΔC(k))g(x(k−h(k)))+(D+ΔD(k))M∑i=1δ(i)x(k−i)+w(k),k∈Z+,z(k)=Azx(k−τ(k))+Bzf(x(k))+Czg(x(k−h(k)))+DzM∑i=1δ(i)x(k−i),k∈N,x(s)=ϕ(s),s=−ρ,−ρ+1,…,0, | (2.1) |
where x(k)=[x1(k),x2(k),…,xn(k)]T∈Rn is the system state vector, z(k) is the output vector of the neural network, w(k) is the exogenous disturbance input vector, A=diag{a1,a2,...,an} is the state feedback coefficient matrix with |ai|<1, the matrices B,C,D,Az,Bz,Cz and Dz are known real constant matrices with appropriate dimensions, M∈N, ϕ(s) is the initial condition of system (2.1), and τ(k) represents the leakage delay satisfying
0<τ1≤τ(k)≤τ2, | (2.2) |
where τ1 and τ2 denote the lower and upper bounds of τ(k). The time-varying delay h(k) satisfies
0<h1≤h(k)≤h2, | (2.3) |
where h1 and h2 are known positive integers. There exists a constant κ>0 such that the function δ(i) satisfies the following convergence condition
M∑i=1δ(i)=κ<+∞. | (2.4) |
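The convergence condition (2.4) only requires the weighted sum to be finite. A minimal sketch, assuming a hypothetical geometric weighting δ(i)=2^(−i) (not from the paper), computes κ and the resulting distributed-delay term:

```python
import numpy as np

# Hypothetical geometric weighting delta(i) = 2**(-i); condition (2.4) only
# asks that kappa, the finite weighted sum, is well defined.
M = 10
delta = [2.0 ** (-i) for i in range(1, M + 1)]
kappa = sum(delta)

# Distributed-delay term sum_{i=1}^{M} delta(i) x(k-i) for a constant history:
history = np.ones(M)                  # x(k-1), ..., x(k-M)
dist_term = float(np.dot(delta, history))
print(kappa, dist_term)  # both equal 1 - 2**(-10)
```

For a constant scalar history equal to one, the distributed-delay term reduces to κ itself.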
ΔA(k), ΔB(k),ΔC(k) and ΔD(k) represent the time-varying parameter uncertainties, and are assumed to satisfy the following linear fractional form
[ΔA(k)ΔB(k)ΔC(k)ΔD(k)]=ΓΔ(k)[H1H2H3H4], | (2.5) |
where Γ,H1,H2,H3 and H4 are known real constant matrices with appropriate dimensions. The uncertain matrix Δ(k) satisfies
Δ(k)=[I−Ω(k)E]−1Ω(k), | (2.6) |
and is said to be admissible, where E is a known matrix satisfying
I−EET>0, | (2.7) |
and Ω(k) is an unknown time-varying matrix function satisfying
ΩT(k)Ω(k)≤I. | (2.8) |
Assumption 1. For i∈{1,2,…,n}, the neuron activation functions fi(⋅), gi(⋅) in system (2.1) are continuous and bounded.
Assumption 2. For any s1,s2∈R,s1≠s2, the continuous and bounded activation functions fi(⋅) and gi(⋅) satisfy
F−i≤fi(s1)−fi(s2)s1−s2≤F+i,i=1,2,…,n,G−i≤gi(s1)−gi(s2)s1−s2≤G+i,i=1,2,…,n, |
and fi(0)=gi(0)=0, where F−i,F+i,G−i, and G+i are known real constants.
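For the common choice fi(s)=tanh(s) (used here only as an illustrative activation, not mandated by the paper), the difference quotient in Assumption 2 lies in [0, 1], so one may take F−i=0 and F+i=1. A quick numerical spot-check:

```python
import numpy as np

# Spot-check: for f(s) = tanh(s), (f(s1)-f(s2))/(s1-s2) lies in [0, 1],
# so Assumption 2 holds with F_i^- = 0, F_i^+ = 1 (tanh is illustrative only).
rng = np.random.default_rng(0)
s1 = rng.uniform(-5.0, 5.0, 1000)
s2 = rng.uniform(-5.0, 5.0, 1000)
mask = np.abs(s1 - s2) > 1e-3          # avoid near-coincident points
q = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
in_sector = bool(q.min() >= 0.0 and q.max() <= 1.0)
print(in_sector)  # True
```

The same check applies to any candidate activation gi when estimating G−i and G+i.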
Definition 1. [28] The discrete-time system (2.1), with w(k)=0, is said to be robustly asymptotically stable if there exists a positive definite scalar function V(x(k)):Z+×Rn↦R such that
ΔV(x(k))=V(x(k+1))−V(x(k))<0, |
along the solution of the system (2.1) for all uncertainties.
Definition 2. [36] The discrete-time system (2.1), with w(k)=0 and Ω(k)=0, is said to be asymptotically stable if there exists a positive definite scalar function V(x(k)):Z+×Rn↦R+ such that
ΔV(x(k))=V(x(k+1))−V(x(k))<0, |
along the solution of the system (2.1).
Definition 3. [32] The system (2.1) is called passive if there exists a scalar γ≥0 such that
2k∑i=0zT(i)w(i)≥−γk∑i=0wT(i)w(i), |
for all k∈Z+ and all solutions of (2.1) with x(0)=0.
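Definition 3 can be spot-checked on a toy system. A minimal sketch, assuming a hypothetical memoryless map z(k)=0.5w(k) (which is passive with γ=0; this example is not from the paper), verifies the inequality along random inputs for every horizon:

```python
import numpy as np

# Toy check of Definition 3: the hypothetical memoryless map z(k) = 0.5*w(k)
# satisfies 2*sum z(i)w(i) = sum w(i)^2 >= 0 >= -gamma*sum w(i)^2 for gamma = 0.
rng = np.random.default_rng(1)
w = rng.normal(size=200)
z = 0.5 * w
gamma = 0.0
supply = 2.0 * np.cumsum(z * w)       # left-hand side for every horizon k
bound = -gamma * np.cumsum(w * w)     # right-hand side for every horizon k
passive = bool(np.all(supply >= bound))
print(passive)  # True
```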
Lemma 1. [17] Suppose that Δ(k) is given by (2.6)–(2.8). Let M,S and N be real constant matrices of appropriate dimension with M=MT. Then, the inequality
M+SΔ(k)N+NTΔ(k)TST<0, |
holds if and only if there exists a positive real constant δ such that
[MSδNT∗−δIδET∗∗−δI]<0. |
Lemma 2. [27] Let γ1,γ2,…,γN:Rm↦R have positive values in an open subset D of Rm. Then, the reciprocally convex combination of γi over D satisfies
min{αi∣αi>0,∑iαi=1}∑i1αiγi(k)=∑iγi(k)+maxϵi,j(k)∑i≠jϵi,j(k), |
subject to
ϵi,j:Rm↦R,ϵj,i(k)Δ=ϵi,j(k),[γi(k)ϵi,j(k)ϵi,j(k)γj(k)]≥0. |
Lemma 3. The following inequality holds for any α∈Rn, β∈Rm, Ξ,Y∈Rn×m, X∈Rn×n, and Z∈Rm×m,
−2αTΞβ≤[αβ]T[XY−Ξ∗Z][αβ], |
where [XY∗Z]≥0.
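Lemma 3 can be verified numerically: the gap between its two sides equals [αT βT][X Y; YT Z][α; β], which is nonnegative whenever the block matrix is positive semidefinite. A sketch with randomly generated data (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2

# Build a PSD block matrix [[X, Y], [Y^T, Z]] as a Gram matrix, then split it.
G = rng.normal(size=(n + m, n + m))
W = G.T @ G
X, Y, Z = W[:n, :n], W[:n, n:], W[n:, n:]
Xi = rng.normal(size=(n, m))                 # Xi is completely arbitrary

ok = True
for _ in range(200):
    a, b = rng.normal(size=n), rng.normal(size=m)
    lhs = -2.0 * a @ Xi @ b
    rhs = a @ X @ a + 2.0 * a @ (Y - Xi) @ b + b @ Z @ b
    ok = ok and (lhs <= rhs + 1e-9)
print(ok)  # True: the gap equals [a; b]^T W [a; b] >= 0
```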
Lemma 4. [8] For any symmetric positive definite matrix M∈Rn×n and two integers h2≥h1>0, the following inequalities hold:
(1) [h1∑i=1x(i)]TM[h1∑i=1x(i)]≤h1h1∑i=1xT(i)Mx(i),
(2) [k−h1−1∑i=k−h2k−h1−1∑j=ix(j)]TM[k−h1−1∑i=k−h2k−h1−1∑j=ix(j)]≤(h2−h1)(h2−h1+1)2k−h1−1∑i=k−h2k−h1−1∑j=ixT(j)Mx(j),
(3) [−h1−1∑i=−h2k−1∑j=k+ix(j)]TM[−h1−1∑i=−h2k−1∑j=k+ix(j)]≤(h2−h1)(h2+h1+1)2−h1−1∑i=−h2k−1∑j=k+ixT(j)Mx(j).
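Items (1) and (2) of Lemma 4 are discrete Jensen-type bounds and can be spot-checked numerically. The sketch below uses random data (sizes and delay values are hypothetical) and also confirms the pair count (h2−h1)(h2−h1+1)/2 appearing in item (2):

```python
import numpy as np

rng = np.random.default_rng(3)
n, h1, h2, k = 3, 2, 6, 0
G = rng.normal(size=(n, n))
M = G.T @ G + np.eye(n)                      # symmetric positive definite

x = {i: rng.normal(size=n) for i in range(k - h2, k)}  # state history

# Item (1): discrete Jensen inequality with h1 terms x(1), ..., x(h1)
xs = rng.normal(size=(h1, n))
s = xs.sum(axis=0)
ok1 = bool(s @ M @ s <= h1 * sum(xi @ M @ xi for xi in xs) + 1e-9)

# Item (2): double-sum version; the pair count is (h2-h1)(h2-h1+1)/2
pairs = [(i, j) for i in range(k - h2, k - h1) for j in range(i, k - h1)]
N = len(pairs)
s2 = sum(x[j] for _, j in pairs)
ok2 = bool(s2 @ M @ s2 <= N * sum(x[j] @ M @ x[j] for _, j in pairs) + 1e-9)

print(ok1, ok2, N == (h2 - h1) * (h2 - h1 + 1) // 2)  # True True True
```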
Lemma 5. [25] For a given positive-definite n×n-matrix R, three given non-negative integers α,β,k satisfying α<β≤k, a vector function x(⋅)∈Rn and denoting Δx(k)=x(k+1)−x(k), we have
k−α−1∑i=k−βΔxT(i)RΔx(i)≥1β−α(Θ0α,β)TRΘ0α,β+3β−α(Θ1α,β)TRΘ1α,β+5β−α(Θ2α,β)TRΘ2α,β, |
where
Θ0α,β=x(k−α)−x(k−β),Θ1α,β=x(k−α)+x(k−β)−2β−α+1k−α∑i=k−βx(i),Θ2α,β=x(k−α)−x(k−β)+6β−α+1k−α∑i=k−βx(i)−12(β−α+2)(β−α+1)−α∑j=−βk−α∑i=k+jx(i). |
Lemma 6. Let x(k)∈Rn be a vector-valued sequence with forward difference Δx(k)=x(k+1)−x(k). Then, the following summation inequality holds for any constant matrices X,Mi∈Rn×n, i=1,2,…,5, and any discrete interval time-varying delay h(k) with 0≤h1≤h(k)≤h2,
−k−h1−1∑i=k−h2ΔxT(i)XΔx(i)≤[x(k−h1)x(k−h(k))x(k−h2)]T[M1+MT1−MT1+M20∗M1+MT1−M2−MT2−MT1+M2∗∗−M2−MT2][x(k−h1)x(k−h(k))x(k−h2)]+[h2−h1][x(k−h1)x(k−h(k))x(k−h2)]T[M3M40∗M3+M5M4∗∗M5][x(k−h1)x(k−h(k))x(k−h2)], | (2.9) |
where
[XM1M2∗M3M4∗∗M5]≥0. |
Proof. From the discrete analog of the Newton-Leibniz formula, we obtain
0=x(k−h1)−x(k−h(k))−k−h1−1∑i=k−h(k)Δx(i), | (2.10) |
0=x(k−h(k))−x(k−h2)−k−h(k)−1∑i=k−h2Δx(i). | (2.11) |
Multiplying the zero equation (2.10) by any constant matrices Ξ1,Ξ2∈Rn×n yields
0=2[xT(k−h1)−xT(k−h(k))−k−h1−1∑i=k−h(k)ΔxT(i)][Ξ1x(k−h1)+Ξ2x(k−h(k))]=[x(k−h1)x(k−h(k))]T[Ξ1+ΞT1−ΞT1+Ξ2∗−Ξ2−ΞT2][x(k−h1)x(k−h(k))]−2k−h1−1∑i=k−h(k)ΔxT(i)[Ξ1Ξ2][x(k−h1)x(k−h(k))]. | (2.12) |
Using Lemma 3 with α=Δx(i), β=[x(k−h1)x(k−h(k))], Y=[M1M2] and Z=[M3M4∗M5], we get
−2k−h1−1∑i=k−h(k)ΔxT(i)[Ξ1Ξ2][x(k−h1)x(k−h(k))]≤k−h1−1∑i=k−h(k)ΔxT(i)XΔx(i)+2[xT(k−h1)−xT(k−h(k))]([M1M2]−[Ξ1Ξ2])[x(k−h1)x(k−h(k))]+[h(k)−h1][x(k−h1)x(k−h(k))]T[M3M4∗M5][x(k−h1)x(k−h(k))]. | (2.13) |
Combining (2.12) and (2.13), we obtain
−k−h1−1∑i=k−h(k)ΔxT(i)XΔx(i)≤[x(k−h1)x(k−h(k))]T[Ξ1+ΞT1−ΞT1+Ξ2∗−Ξ2−ΞT2][x(k−h1)x(k−h(k))]+[x(k−h1)x(k−h(k))]T([M1+MT1−MT1+M2∗−M2−MT2]−[Ξ1+ΞT1−ΞT1+Ξ2∗−Ξ2−ΞT2])[x(k−h1)x(k−h(k))]+[h2−h1][x(k−h1)x(k−h(k))]T[M3M4∗M5][x(k−h1)x(k−h(k))]=[x(k−h1)x(k−h(k))]T[M1+MT1−MT1+M2∗−M2−MT2][x(k−h1)x(k−h(k))]+[h2−h1][x(k−h1)x(k−h(k))]T[M3M4∗M5][x(k−h1)x(k−h(k))]. | (2.14) |
By Eq (2.11), the following equation is true for any constant matrices Ξ1,Ξ2∈Rn×n
0=2[xT(k−h(k))−xT(k−h2)−k−h(k)−1∑i=k−h2ΔxT(i)][Ξ1x(k−h(k))+Ξ2x(k−h2)]. |
Similarly, we have
−k−h(k)−1∑i=k−h2ΔxT(i)XΔx(i)≤[x(k−h(k))x(k−h2)]T[M1+MT1−MT1+M2∗−M2−MT2][x(k−h(k))x(k−h2)]+[h2−h1][x(k−h(k))x(k−h2)]T[M3M4∗M5][x(k−h(k))x(k−h2)]. | (2.15) |
Finally, combining (2.14) and (2.15) establishes the summation inequality (2.9). This completes the proof.
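The summation inequality (2.9) of Lemma 6 can be verified numerically by generating the constraint matrix as a Gram matrix (so the positive semidefiniteness condition holds by construction) and evaluating both sides on a random state sequence; the dimensions and delay values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2
h1, hk, h2, k = 2, 4, 6, 10          # hypothetical delays, 0 < h1 <= h(k) <= h2

# Generate [[X, M1, M2], [*, M3, M4], [*, *, M5]] >= 0 as a Gram matrix.
G = rng.normal(size=(3 * n, 3 * n))
W = G.T @ G
X,  M1, M2 = W[:n, :n], W[:n, n:2 * n], W[:n, 2 * n:]
M3, M4, M5 = W[n:2 * n, n:2 * n], W[n:2 * n, 2 * n:], W[2 * n:, 2 * n:]

# Random state sequence x(k-h2), ..., x(k-h1) and its forward differences.
x = {i: rng.normal(size=n) for i in range(k - h2, k - h1 + 1)}
dx = {i: x[i + 1] - x[i] for i in range(k - h2, k - h1)}

lhs = -sum(dx[i] @ X @ dx[i] for i in range(k - h2, k - h1))

v = np.concatenate([x[k - h1], x[k - hk], x[k - h2]])
O = np.zeros((n, n))
P1 = np.block([[M1 + M1.T, -M1.T + M2, O],
               [(-M1.T + M2).T, M1 + M1.T - M2 - M2.T, -M1.T + M2],
               [O, (-M1.T + M2).T, -M2 - M2.T]])
P2 = np.block([[M3, M4, O],
               [M4.T, M3 + M5, M4],
               [O, M4.T, M5]])
rhs = v @ P1 @ v + (h2 - h1) * (v @ P2 @ v)
ok = bool(lhs <= rhs + 1e-9)
print(ok)  # True: inequality (2.9) holds for this sample
```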
This subsection presents a stability analysis of system (2.1) with w(k)=0. The LMI-based conditions will be derived using the Lyapunov technique.
Consider the following neural network with interval leakage delay of the form
{x(k+1)=(A+ΔA(k))x(k−τ(k))+(B+ΔB(k))f(x(k))+(C+ΔC(k))g(x(k−h(k)))+(D+ΔD(k))M∑i=1δ(i)x(k−i),k∈Z+,x(s)=ϕ(s),s∈{−ρ,−ρ+1,…,−1,0}. | (3.1) |
We first present the notations that will be used later:
Π=[Πi,j]21×21, | (3.2) |
where Πi,j=ΠTj,i, i,j=1,2,3,…,21,
Π1,1=P1J1+P1J2+JT1P1+JT2P1+QT1(A1−I)+(AT1−I)Q1+(h12+1)P2+(τ12+1)P3−9R1−9R3+h1(L1+LT1)+h21L3+h2(M1+MT1)+(h22)M3+τ1(S1+ST1)+(τ21)S3+τ2(T1+TT1)+(τ22)T3+ξP6−F1Λ1,Π1,2=P1+JT1P1+JT2P1−QT1+(A1−I)Q2,Π1,3=−P1J1+h1(−LT1+L2)+(h21)L4+h2(−MT1+M2)+(h22)M4,Π1,4=3R1,Π1,6=−P1J1,Π1,7=−24h1+1R1,Π1,8=−60(h1+2)(h1+1)R1,Π1,11=−P1J2+QT1A2+(A1−I)Q3+τ1(−ST1+S2)+(τ21)S4+(τ22)T4+τ1(−TT1+T2),Π1,12=3R3,Π1,14=−P1J2−QT1A1+(A1−I)Q4,Π1,15=−24τ1−1R3,Π1,16=60(τ1+2)(τ1+1)R3,Π1,19=QT1B+(A1−I)Q5+F2Λ1,Π1,20=QT1C+(A1−I)Q6,Π1,21=QT1D+(A1−I)Q7,Π2,2=P1−QT2−Q2+(h22)P4+(τ22)P5+(h21)R1+(h212)R2+(h21)Z1+(h22)Z2+(h212)Z3+(τ21)R3+(τ212)R4+(τ21)Z4+(τ22)Z5+(τ212)Z6,Π2,3=−P1J1,Π2,6=−P1J1, Π2,11=−P1J2+QT2A2−Q3,Π2,14=−P1J2−QT2A1−Q4,Π2,19=QT2B−Q5,Π2,20=QT2C−Q6,Π2,21=QT2D−Q7,Π3,3=−G1Λ2+h1(L1+LT1−L2−LT2)+(h21)(L3+L5)+(h22)(M3+M5)+h2(M1+MT1−M2−MT2)+h12(N1+NT1−N2−NT2)+(h212)(N3+N5),Π3,4=h1(−LT1+L2)+(h21)L4+h12(−N1+NT2)+(h212)NT4,Π3,5=h2(−MT1+M2)+(h22)M4+h12(−NT1+N2)+(h212)N4,Π3,20=G2Λ2,Π4,4=−9R1−9R2+h1(−L2−LT2)+(h21)L5+h12(N1+NT1)+(h212)N3,Π4,5=3R2,Π4,6=36h1+1R1,Π4,7=−60(h1+2)(h1+1)R1,Π4,8=−24h12+1R2,Π4,9=60(h12+2)(h12+1)R2,Π5,5=−P2−9R2+h2(−M2−MT2)+(h22)M5+h12(−N2−NT2)+(h212)N5,Π5,9=36h12+1R2,Π5,10=−60(h12+2)(h12+1)R2,Π6,6=−P4,Π7,7=−192(h1+1)2R1,Π7,8=360(h1+2)(h1+1)2R1,Π8,8=−720(h1+2)2(h1+1)2R1,Π9,9=−192(h12+1)2R2,Π9,10=360(h12+2)(h12+1)2R2,Π10,10=−720(h12+2)2(h12+1)2R2,Π11,11=QT3A2+A2Q3+τ1(S1+ST1−S2−ST2)+(τ21)(S3+S5)+τ2(T1+TT1−T2−TT2)+(τ22)(T3+T5)+(τ212)(U3+U5)+τ12(U1+UT1−U2−UT2),Π11,12=τ1(−ST1+S2)+(τ21)S4+τ12(−U1+UT2)+(τ212)UT4,Π11,13=τ2(−TT1+T2)+(τ22)T4+τ12(−UT1+U2)+(τ212)U4,Π11,14=−QT3A1+A2Q4,Π11,19=QT3B+A2Q5,Π11,20=QT3C+A2Q6,Π11,21=QT3D+A2Q7,Π12,12=−9R3−9R4+τ1(−S2−ST2)+(τ21)S5+τ12(−U1−UT1)+(τ212)U3,Π12,13=3R4,Π12,15=36(τ1+1)R3,Π12,16=−60(τ1+2)(τ1+1)R3,Π12,17=−24τ12+1R4,Π12,18=60(τ12+2)(τ12+1)R4,Π13,13=−P3−9R4+τ2(−T2−TT2)+(τ22)T5+τ12(−U2−UT2)+(τ212)U5,Π13,17=36(τ12+1)R4,Π13,18=−60(τ12+2)(τ12+1)R4,Π14,14=−QT4A1−A1Q4−P5,Π14,19=QT4B−A1Q5,Π14,20=QT4C−A1Q6,Π14,21=QT4D−A1Q7,Π15,15=−192(τ1+1)2R3,Π15,16=360(τ1+2)(τ1+1)2R3,Π16,16=−720(τ1+2)2(τ1+1)2R3,Π17,17=−192(τ12+1)2R4,Π17,18=360(τ12+2)(τ12+1)2R4,Π18,18=−720(τ12+2)2(τ12+1)2R4,Π19,19=QT5B+BQ5−Λ1,Π19,20=QT5C+BQ6,Π19,21=QT5D+BQ7,Π20,20=QT6C+CQ6−Λ2,Π20,21=QT6D+CQ7,Π21,21=QT7D+DQ7−ξ−1P6, |
and all other entries are equal to zero.
First, we examine the discrete-time neural network with interval time-varying discrete, leakage, and distributed delays of the form
{x(k+1)=Ax(k−τ(k))+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i),k∈Z+,x(s)=ϕ(s),s∈{−ρ,−ρ+1,…,−1,0}. | (3.3) |
Theorem 1. The system (3.3) is asymptotically stable, if there exist positive definite symmetric matrices Pi,Qj,Rk,Zl, i=1,2,3,…,6, j=1,2,3,…,7, k=1,2,3,4, l=1,2,3,…,6, and any appropriate dimensional matrices Λ1,Λ2, satisfying the following LMIs
Π<0, | (3.4) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.5) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.6) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.7) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.8) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.9) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.10) |
Proof. We begin by establishing the asymptotic stability of system (3.3) under the conditions of the theorem. First, decompose the constant matrix A as
A=A1+A2, | (3.11) |
where A1,A2∈Rn×n are real constant matrices; this decomposition is introduced to improve the delay bounds.
Next, we rewrite system (3.3) using a model transformation, obtaining the following equivalent form:
x(k+1)=x(k)+y(k), | (3.12) |
y(k)=(A1−I)x(k)+A2x(k−τ(k))−A1k−1∑i=k−τ(k)y(i)+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i). | (3.13) |
Construct the following Lyapunov-Krasovskii functional:
V(k)=10∑i=1Vi(k), | (3.14) |
where
V1(k)=xT(k)P1x(k),V2(k)=k−1∑i=k−h2xT(i)P2x(i)+−h1∑i=−h2+1k−1∑j=k+ixT(j)P2x(j),V3(k)=h20∑i=−h2+1k−1∑j=k+i−1yT(j)P4y(j),V4(k)=h1−1∑i=−h1k−1∑j=k+iyT(j)R1y(j)+(h2−h1)−h1−1∑i=−h2k−1∑j=k+iyT(j)R2y(j),V5(k)=h1−1∑i=−h1k−1∑j=k+iyT(j)Z1y(j)+h2−1∑i=−h2k−1∑j=k+iyT(j)Z2y(j)+(h2−h1)−h1−1∑i=−h2k−1∑j=k+iyT(j)Z3y(j),V6(k)=M∑i=1δ(i)k−1∑j=k−ixT(j)P6x(j),V7(k)=k−1∑i=k−τ2xT(i)P3x(i)+−τ1∑i=−τ2+1k−1∑j=k+ixT(j)P3x(j),V8(k)=τ20∑i=−τ2+1k−1∑j=k+i−1yT(j)P5y(j),V9(k)=τ1−1∑i=−τ1k−1∑j=k+iyT(j)R3y(j)+(τ2−τ1)−τ1−1∑i=−τ2k−1∑j=k+iyT(j)R4y(j),V10(k)=τ1−1∑i=−τ1k−1∑j=k+iyT(j)Z4y(j)+τ2−1∑i=−τ2k−1∑j=k+iyT(j)Z5y(j)+(τ2−τ1)−τ1−1∑i=−τ2k−1∑j=k+iyT(j)Z6y(j). |
The forward difference of V(k) along the trajectory of system (3.3) is given by
ΔV(k)=10∑i=1ΔVi(k). | (3.15) |
Let us define for i=1,2,…,10,
ΔVi(k)=Vi(k+1)−Vi(k), |
where
ΔV1(k)=[x(k)+y(k)]TP1[x(k)+y(k)]−xT(k)P1x(k)+[2xT(k)QT1+2yT(k)QT2+2xT(k−τ(k))QT3+2k−1∑i=k−τ(k)yT(i)QT4+2f(x(k))TQT5+2g(x(k−h(k)))TQT6+2(M∑i=1δ(i)x(k−i))TQT7]×[−y(k)+(A1−I)x(k)+A2x(k−τ(k))−A1k−1∑i=k−τ(k)y(i)+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i)], | (3.16) |
ΔV2(k)=xT(k)P2x(k)−xT(k−h2)P2x(k−h2)+−h1∑i=−h2+1[xT(k)P2x(k)−xT(k+i)P2x(k+i)]=(h12+1)xT(k)P2x(k)−xT(k−h2)P2x(k−h2)−k−h1∑i=k−h2+1xT(i)P2x(i)≤(h12+1)xT(k)P2x(k)−xT(k−h2)P2x(k−h2). | (3.17) |
Based on Lemma 4, the forward difference of V3(k) is calculated as
ΔV3(k)=h20∑i=−h2+1[yT(k)P4y(k)−yT(k+i−1)P4y(k+i−1)]≤h22yT(k)P4y(k)−k−1∑i=k−h2yT(i)P4k−1∑i=k−h2y(i)≤h22yT(k)P4y(k)−k−1∑i=k−h(k)yT(i)P4k−1∑i=k−h(k)y(i). | (3.18) |
ΔV4(k)=h21yT(k)R1y(k)+h212yT(k)R2y(k)−h1k−1∑i=k−h1yT(i)R1y(i)−h12k−h1−1∑i=k−h2yT(i)R2y(i). | (3.19) |
ΔV5(k)=h1−1∑i=−h1[yT(k)Z1y(k)−yT(k+i)Z1y(k+i)]+h2−1∑i=−h2[yT(k)Z2y(k)−yT(k+i)Z2y(k+i)]+h12−h1−1∑i=−h2[yT(k)Z3y(k)−yT(k+i)Z3y(k+i)]=h21yT(k)Z1y(k)−h1k−1∑i=k−h1yT(i)Z1y(i)+h22yT(k)Z2y(k)−h2k−1∑i=k−h2yT(i)Z2y(i)+h212yT(k)Z3y(k)−h12k−h1−1∑i=k−h2yT(i)Z3y(i). | (3.20) |
ΔV6(k)≤xT(k)(ξP6)x(k)−[M∑i=1δ(i)x(k−i)]T(1ξP6)[M∑i=1δ(i)x(k−i)]. | (3.21) |
ΔV7(k)=xT(k)P3x(k)−xT(k−τ2)P3x(k−τ2)+−τ1∑i=−τ2+1[xT(k)P3x(k)−xT(k+i)P3x(k+i)]≤(τ12+1)xT(k)P3x(k)−xT(k−τ2)P3x(k−τ2). | (3.22) |
ΔV8(k)≤τ22yT(k)P5y(k)−k−1∑i=k−τ2yT(i)P5k−1∑i=k−τ2y(i)≤τ22yT(k)P5y(k)−k−1∑i=k−τ(k)yT(i)P5k−1∑i=k−τ(k)y(i). | (3.23) |
ΔV9(k)=τ21yT(k)R3y(k)+τ212yT(k)R4y(k)−τ1k−1∑i=k−τ1yT(i)R3y(i)−τ12k−τ1−1∑i=k−τ2yT(i)R4y(i). | (3.24) |
ΔV10(k)=τ21yT(k)Z4y(k)−τ1k−1∑i=k−τ1yT(i)Z4y(i)+τ22yT(k)Z5y(k)−τ2k−1∑i=k−τ2yT(i)Z5y(i)+τ212yT(k)Z6y(k)−τ12k−τ1−1∑i=k−τ2yT(i)Z6y(i). | (3.25) |
By Lemma 5, the four summation terms in ΔV4(k) and ΔV9(k) can be bounded as
k−1∑i=k−h1yT(i)R1y(i)≥1h1[x(k)−x(k−h1)]TR1[x(k)−x(k−h1)]+3h1[x(k)+x(k−h1)−2h1+1k∑i=k−h1x(i)]TR1[⋆]+5h1[x(k)−x(k−h1)+6h1+1k∑i=k−h1x(i)−12(h1+2)(h1+1)0∑i=−h1k∑j=k+ix(i)]TR1[⋆],k−h1−1∑i=k−h2yT(i)R2y(i)≥1h12[x(k−h1)−x(k−h2)]TR2[⋆]+3h12[x(k−h1)+x(k−h2)−2h12+1k−h1∑i=k−h2x(i)]TR2[⋆]+5h12[x(k−h1)−x(k−h2)+6h12+1k−h1∑i=k−h2x(i)−12(h12+2)(h12+1)−h1∑i=−h2k−h1∑j=k+ix(i)]TR2[⋆],k−1∑i=k−τ1yT(i)R3y(i)≥1τ1[x(k)−x(k−τ1)]TR3[x(k)−x(k−τ1)]+3τ1[x(k)+x(k−τ1)−2τ1+1k∑i=k−τ1x(i)]TR3[⋆]+5τ1[x(k)−x(k−τ1)+6τ1+1k∑i=k−τ1x(i)−12(τ1+2)(τ1+1)0∑i=−τ1k∑j=k+ix(i)]TR3[⋆],k−τ1−1∑i=k−τ2yT(i)R4y(i)≥1τ12[x(k−τ1)−x(k−τ2)]TR4[⋆]+3τ12[x(k−τ1)+x(k−τ2)−2τ12+1k−τ1∑i=k−τ2x(i)]TR4[⋆]+5τ12[x(k−τ1)−x(k−τ2)+6τ12+1k−τ1∑i=k−τ2x(i)−12(τ12+2)(τ12+1)−τ1∑i=−τ2k−τ1∑j=k+ix(i)]TR4[⋆]. |
By Lemma 6, the six summation terms in ΔV5(k) and ΔV10(k) can be bounded as
−k−1∑i=k−h1yT(i)Z1y(i)≤[x(k)x(k−h(k))x(k−h1)]T[L1+LT1−LT1+L20∗L1+LT1−L2−LT2−LT1+L2∗∗−L2−LT2][⋆]+h1[x(k)x(k−h(k))x(k−h1)]T[L3L40∗L3+L5L4∗∗L5][⋆], |
−k−1∑i=k−h2yT(i)Z2y(i)≤[x(k)x(k−h(k))x(k−h2)]T[M1+MT1−MT1+M20∗M1+MT1−M2−MT2−MT1+M2∗∗−M2−MT2][⋆]+h2[x(k)x(k−h(k))x(k−h2)]T[M3M40∗M3+M5M4∗∗M5][⋆],−k−h1−1∑i=k−h2yT(i)Z3y(i)≤[x(k−h1)x(k−h(k))x(k−h2)]T[N1+NT1−NT1+N20∗N1+NT1−N2−NT2−NT1+N2∗∗−N2−NT2][⋆]+h12[x(k−h1)x(k−h(k))x(k−h2)]T[N3N40∗N3+N5N4∗∗N5][⋆],−k−1∑i=k−τ1yT(i)Z4y(i)≤[x(k)x(k−τ(k))x(k−τ1)]T[S1+ST1−ST1+S20∗S1+ST1−S2−ST2−ST1+S2∗∗−S2−ST2][⋆]+τ1[x(k)x(k−τ(k))x(k−τ1)]T[S3S40∗S3+S5S4∗∗S5][⋆], |
−k−1∑i=k−τ2yT(i)Z5y(i)≤[x(k)x(k−τ(k))x(k−τ2)]T[T1+TT1−TT1+T20∗T1+TT1−T2−TT2−TT1+T2∗∗−T2−TT2][⋆]+τ2[x(k)x(k−τ(k))x(k−τ2)]T[T3T40∗T3+T5T4∗∗T5][⋆],−k−τ1−1∑i=k−τ2yT(i)Z6y(i)≤[x(k−τ1)x(k−τ(k))x(k−τ2)]T[U1+UT1−UT1+U20∗U1+UT1−U2−UT2−UT1+U2∗∗−U2−UT2][⋆]+τ12[x(k−τ1)x(k−τ(k))x(k−τ2)]T[U3U40∗U3+U5U4∗∗U5][⋆]. |
From Assumption 2, we have
[x(k)f(x(k))]T[F1ΛT1−F2Λ1−F2Λ1Λ1][x(k)f(x(k))]≤0, | (3.26) |
and
[x(k−h(k))g(x(k−h(k)))]T[G1ΛT2−G2Λ2−G2Λ2Λ2][x(k−h(k))g(x(k−h(k)))]≤0, | (3.27) |
where
Λ1=diag{λ11,λ12,…,λ1n},Λ2=diag{λ21,λ22,…,λ2n},F1=diag{F+1F−1,F+2F−2,…,F+nF−n},F2=diag{F+1+F−12,F+2+F−22,…,F+n+F−n2},G1=diag{G+1G−1,G+2G−2,…,G+nG−n},G2=diag{G+1+G−12,G+2+G−22,…,G+n+G−n2}, |
where F−i,F+i,G−i, and G+i (i = 1, 2, ..., n) are known real constants, and λ1i,λ2i>0 (i = 1, 2, ..., n).
According to (3.12)–(3.27), it is straightforward to see that
ΔV(k)≤ξT(k)Πξ(k), | (3.28) |
where ξ(k)=[xT(k),yT(k),xT(k−h(k)),xT(k−h1),xT(k−h2),k−1∑i=k−h(k)yT(i),k∑i=k−h1xT(i), 0∑i=−h1k∑j=k+ix(i),k∑i=k−h12xT(i),0∑i=−h2k−h1∑j=k+ixT(i),xT(k−τ(k)),xT(k−τ1),xT(k−τ2), k−1∑i=k−τ(k)yT(i),k∑i=k−τ1xT(i),0∑i=−τ1k∑j=k+ixT(i),k∑i=k−τ2xT(i),0∑i=−τ2k−τ1∑j=k+ixT(i),fT(x(k)), gT(x(k−h(k))),(M∑i=1δ(i)x(k−i))T]T, and Π is defined in (3.2). From (3.4)–(3.10), system (3.3) is asymptotically stable, as defined in Definition 2. This completes the proof of the theorem.
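In practice, conditions such as (3.4)–(3.10) are checked numerically (the paper uses Matlab's LMI Control toolbox). As a much simpler, delay-free illustration of the underlying Lyapunov LMI idea, and not of the full criterion Π, the following numpy-only sketch constructs P>0 with ATPA−P<0 for a Schur-stable A via the discrete Lyapunov series:

```python
import numpy as np

# Schur-stable A; P = sum_k (A^T)^k Q A^k solves A^T P A - P = -Q < 0,
# which is the delay-free discrete-time Lyapunov LMI certificate.
A = np.array([[0.5, 0.2],
              [0.0, 0.6]])
Q = np.eye(2)

P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(500):                  # series converges geometrically
    P += Ak.T @ Q @ Ak
    Ak = Ak @ A

residual = A.T @ P @ A - P + Q        # should vanish
ok = bool(np.max(np.abs(residual)) < 1e-9 and np.min(np.linalg.eigvalsh(P)) > 0)
print(ok)  # True
```

Dedicated semidefinite-programming solvers play the role of this closed-form construction when the decision variables enter the LMIs freely, as in Theorem 1.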
If the leakage delay term disappears, that is, τ(k)=0, the neural network system (3.3) reduces to
{x(k+1)=Ax(k)+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i),x(s)=ϕ(s),s∈{−h2,−h2+1,…,−1,0}. | (3.29) |
The delay-dependent stability criterion for the system in (3.29) can be directly deduced from Theorem 1.
We introduce the following notations for later use
ˉΠ=[ˉΠi,j]13×13, | (3.30) |
where ˉΠi,j=ˉΠTj,i=Πi,j, i,j=1,2,3,…,10,19,20,21, and it is presented in the following theorem.
Theorem 2. The system (3.29) is asymptotically stable, if there exist positive definite symmetric matrices Pi,Qj,Rk,Zl, i=1,2,4,6, j=1,2,3,…,6, k=1,2, l=1,2,3, and any appropriate dimensional matrices Λ1,Λ2, satisfying the following LMIs
ˉΠ<0, | (3.31) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.32) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.33) |
[Z3N1N2∗N3N4∗∗N5]≥0. | (3.34) |
Proof. The proof follows the same method as that of Theorem 1, except that the matrix A is not decomposed; rewriting system (3.29) via the model transformation gives the following descriptor form
x(k+1)=x(k)+y(k), | (3.35) |
y(k)=(A−I)x(k)+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i). | (3.36) |
Construct the following Lyapunov-Krasovskii functional as
V(k)=6∑i=1Vi(k), | (3.37) |
where
V1(k)=xT(k)P1x(k),V2(k)=k−1∑i=k−h2xT(i)P2x(i)+−h1∑i=−h2+1k−1∑j=k+ixT(j)P2x(j),V3(k)=h20∑i=−h2+1k−1∑j=k+i−1yT(j)P4y(j),V4(k)=h1−1∑i=−h1k−1∑j=k+iyT(j)R1y(j)+(h2−h1)−h1−1∑i=−h2k−1∑j=k+iyT(j)R2y(j),V5(k)=h1−1∑i=−h1k−1∑j=k+iyT(j)Z1y(j)+h2−1∑i=−h2k−1∑j=k+iyT(j)Z2y(j)+(h2−h1)−h1−1∑i=−h2k−1∑j=k+iyT(j)Z3y(j),V6(k)=M∑i=1δ(i)k−1∑j=k−ixT(j)P6x(j). |
The forward difference of V(k) is given by
ΔV(k)=6∑i=1ΔVi(k). | (3.38) |
Let us define for i=1,2,…,6,
ΔVi(k)=Vi(k+1)−Vi(k). |
We can estimate ΔV1(k) as follows.
ΔV1(k)=[x(k)+y(k)]TP1[x(k)+y(k)]−xT(k)P1x(k)+[2xT(k)QT1+2yT(k)QT2+2f(x(k))TQT4+2g(x(k−h(k)))TQT5+2(M∑i=1δ(i)x(k−i))TQT6][−y(k)+(A−I)x(k)+Bf(x(k))+Cg(x(k−h(k)))+DM∑i=1δ(i)x(k−i)]. | (3.39) |
The rest of the proof is omitted since it is analogous to that of Theorem 1.
When D=0, the neural network system (3.3) becomes
{x(k+1)=Ax(k−τ(k))+Bf(x(k))+Cg(x(k−h(k))),k∈Z+,x(s)=ϕ(s),s∈{−ρ,−ρ+1,…,−1,0}. | (3.40) |
The delay-dependent stability criterion for the system in (3.40) can be directly deduced from Theorem 1.
We introduce the following notations for later use
ˆΠ=[ˆΠi,j]20×20, | (3.41) |
where ˆΠi,j=ˆΠTj,i=Πi,j, i,j=1,2,3,…,20, and it is presented in the following corollary.
Corollary 1. For given integers h1,h2 satisfying 0<h1≤h2, system (3.40) is asymptotically stable for 0<h1≤h(k)≤h2, if there exist positive definite matrices Pi,Qj,Rk,Zl, i=1,2,3,…,6, j=1,2,3,…,6, k=1,2,3,4, l=1,2,3,…,6, and any appropriate dimensional matrices Λ1,Λ2, satisfying the following LMIs.
ˆΠ<0, | (3.42) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.43) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.44) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.45) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.46) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.47) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.48) |
Proof. The proof is omitted since it is almost identical to that of Theorem 1, except that the matrix D is not involved.
If the leakage delay term disappears and D=0, the neural network system (3.29) becomes
{x(k+1)=Ax(k)+Bf(x(k))+Cg(x(k−h(k))),k∈Z+,x(s)=ϕ(s),s∈{−h2,−h2+1,…,−1,0}. | (3.49) |
The delay-dependent stability criterion for the system in (3.49) can be directly deduced from Theorem 2.
We introduce the following notations for later use
ˆˉΠ=[ˆˉΠi,j]12×12, | (3.50) |
where ˆˉΠi,j=ˆˉΠTj,i=Πi,j, i,j=1,2,3,…,10,19,20, and it is presented in the following corollary.
Corollary 2. For given integers h1,h2 satisfying 0<h1≤h2, system (3.49) is asymptotically stable for 0<h1≤h(k)≤h2, if there exist positive definite matrices Pi,Qj,Rk,Zl, i=1,2,4,6,j=1,2,3,…,6,k=1,2,l=1,2,3 and any appropriate dimensional matrices Λ1,Λ2, satisfying the following LMIs
ˆˉΠ<0, | (3.51) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.52) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.53) |
[Z3N1N2∗N3N4∗∗N5]≥0. | (3.54) |
Proof. The proof is omitted since it parallels that of Theorem 2 with D=0.
For system (3.1), we now derive robust asymptotic stability using Theorem 1 together with the following notations, which will be used later.
SnT=[ΓTQ1ΓTQ200000000ΓTQ300ΓTQ40000ΓTQ5ΓTQ6ΓTQ7], | (3.55) |
Nn=[H1000000000000−H10000H2H3H4]. | (3.56) |
Theorem 3. The system (3.1) is robustly asymptotically stable, if there exist positive definite symmetric matrices Pi,Qj,Rk,Zl, i=1,2,3,…,6, j=1,2,3,…,7, k=1,2,3,4, l=1,2,3,…,6, any appropriate dimensional matrices Λ1,Λ2, and any positive real constant δ satisfying the following LMIs
[ΠSnδNnT∗−δIδET∗∗−δI]<0, | (3.57) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.58) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.59) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.60) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.61) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.62) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.63) |
Proof. Starting from the LMIs of Theorem 1 and replacing A1, B, C and D in (3.4) with A1+ΔA(k), B+ΔB(k), C+ΔC(k) and D+ΔD(k) from (2.5), respectively, we find that the robust stability condition takes the following form
Π+SnΔ(k)Nn+NnTΔ(k)TSnT<0. | (3.64) |
By using Lemma 1, we obtain that (3.64) is equivalent to the following LMI
[ΠSnδNnT∗−δIδET∗∗−δI]<0, | (3.65) |
where δ is a positive real constant. From Theorem 1, conditions (3.57)–(3.63) and Definition 1, system (3.1) is robustly asymptotically stable. This completes the proof of the theorem.
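The step from (3.64) to (3.65) is the standard argument for norm-bounded uncertainty (Lemma 1) combined with a Schur complement. The sketch below checks this implication numerically on a toy instance: the matrices Π, Sn, Nn here are small assumed examples chosen for illustration, not the paper's actual blocks, and E is taken to be zero. If the augmented block matrix of (3.65) is negative definite, then (3.64) holds for every uncertainty with ‖Δ‖ ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Toy instances (assumed for illustration; not the paper's actual Pi, Sn, Nn).
Pi = -3.0 * np.eye(n)          # nominal LMI block, negative definite
Sn = 0.5 * np.eye(n)           # uncertainty input matrix
Nn = 0.5 * np.eye(n)           # uncertainty output matrix
delta = 1.0                    # scaling constant from Lemma 1 (E = 0 here)

# Augmented block matrix of (3.65) with E = 0:
# [[Pi, Sn, delta*Nn^T], [Sn^T, -delta*I, 0], [delta*Nn, 0, -delta*I]]
Z = np.zeros((n, n))
big = np.block([
    [Pi,         Sn,                 delta * Nn.T],
    [Sn.T,       -delta * np.eye(n), Z],
    [delta * Nn, Z,                  -delta * np.eye(n)],
])
assert np.max(np.linalg.eigvalsh(big)) < 0   # the augmented LMI holds

# Then (3.64) holds for every admissible uncertainty with spectral norm <= 1.
for _ in range(200):
    D = rng.standard_normal((n, n))
    D /= max(1.0, np.linalg.norm(D, 2))          # enforce ||Delta|| <= 1
    M = Pi + Sn @ D @ Nn + Nn.T @ D.T @ Sn.T     # left-hand side of (3.64)
    assert np.max(np.linalg.eigvalsh(M)) < 0
print("uncertain matrix inequality (3.64) verified on sampled Delta")
```

The same check, run with the matrices actually produced by the LMI solver, is a cheap way to validate a computed certificate.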
If D=0, then system (3.1) reduces to the following system
{x(k+1)=(A+ΔA(k))x(k−τ(k))+(B+ΔB(k))f(x(k))+(C+ΔC(k))g(x(k−h(k))), x(k)=ϕ(k), k=−ρ,−ρ+1,…,0. | (3.66) |
The delay-dependent stability criteria for the system in (3.66) can be directly deduced from Theorem 3.
We introduce the following notations for later use
^SnT=[ΓTQ1ΓTQ200000000ΓTQ300ΓTQ40000ΓTQ5ΓTQ6], | (3.67) |
^Nn=[H1000000000000−H10000H2H3]. | (3.68) |
These notations are used in the following corollary.
Corollary 3. The system (3.66) is robustly asymptotically stable, if there exist positive definite symmetric matrices Pi,Qj,Rk, i=1,2,…,9, j=1,2,…,4,k=1,2,…,8, any appropriate dimensional matrices J,T1,T2,Sl, Jm,Km,Mm,Nm, l=1,2,…,4, m=1,2,3 and any positive real constant δ such that the following LMIs hold
[ˆΠˆSδˆNT∗−δIδET∗∗−δI]<0, | (3.69) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.70) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.71) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.72) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.73) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.74) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.75) |
Proof. Starting from the LMIs of Corollary 1, replace A1 and B in (3.42) with A1+ΔA(k) and B+ΔB(k) from (2.5), respectively. Then condition (3.69) is equivalent to the following condition
ˆΠ+ˆSΔ(k)ˆN+ˆNTΔ(k)TˆST<0. | (3.76) |
By Lemma 1, condition (3.76) is equivalent to the following LMI
[ˆΠˆSδˆNT∗−δIδET∗∗−δI]<0, | (3.77) |
where δ is a positive real constant. From Corollary 1 and conditions (3.69)–(3.75), system (3.66) is robustly asymptotically stable. The proof is completed.
This subsection focuses on the robust passivity analysis of the uncertain linear discrete-time system with interval discrete and distributed time-varying delays (2.1). The LMI-based conditions will be derived using the Lyapunov technique.
First, we introduce the following notations for later use
SnT0=[SnT0],Nn0=[Nn0],˘Π=[˘Πi,j]22×22, |
where ˘Πi,j=˘ΠTj,i=Πi,j, i,j=1,2,3,…,22,
˘Π1,22=QT8(A1−I)+Q1, ˘Π2,22=−QT8+Q2, ˘Π11,22=−ATz+QT8A2+Q3, ˘Π14,22=−QT8A1+Q4, ˘Π19,22=−BTz+QT8B+Q5, ˘Π20,22=−CTz+QT8C+Q6, ˘Π21,22=−DTz+QT8D+Q7, ˘Π22,22=−γI+QT8+Q8, |
and others are equal to zero.
Theorem 4. The system (2.1) is robustly passive, if there exist positive definite symmetric matrices Pi,Qj,Rk, i=1,2,…,10,j=1,2,…,6,k=1,2,…,8, any appropriate dimensional matrices J,T1,T2,Sl, Jm,Km,Mm,Nm, l=1,2,…,4,m=1,2,3 and any positive real constants δ and γ satisfying the following LMIs
[˘ΠSn0δNnT0∗−δIδET∗∗−δI]<0, | (3.78) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.79) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.80) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.81) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.82) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.83) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.84) |
Proof. The proof follows from Theorems 1 and 3. Choosing the Lyapunov-Krasovskii functional (3.14) and combining the forward differences (3.16)–(3.27) with (2.1)–(2.8) and conditions (3.78)–(3.84), it follows that
ΔV(k)+(−2zT(k)w(k)−γwT(k)w(k))≤0. | (3.85) |
For any positive integer l, summing both sides of (3.85) over k from 0 to l gives
l∑k=0ΔV(k)+l∑k=0(−2zT(k)w(k)−γwT(k)w(k))≤0,V(l+1)−V(0)−2l∑k=0zT(k)w(k)−γl∑k=0wT(k)w(k)≤0. |
Under the zero initial condition, V(0)=0 and V(l+1)≥0, so we have
−γl∑k=0wT(k)w(k)≤2l∑k=0zT(k)w(k). | (3.86) |
Therefore, (3.86) yields the inequality in Definition 3, and we conclude that system (2.1) is robustly passive. This completes the proof of the theorem.
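The passage from (3.85) to (3.86) uses nothing beyond a telescoping sum, V(l+1) ≥ 0, and V(0) = 0 under the zero initial condition. The identity can be sanity-checked on any nonnegative sequence; the sequence below is synthetic, not one of the paper's Lyapunov functionals.

```python
import numpy as np

rng = np.random.default_rng(1)
l = 50
V = np.abs(rng.standard_normal(l + 2))   # synthetic nonnegative V(0), ..., V(l+1)
V[0] = 0.0                               # zero initial condition: V(0) = 0

dV = V[1:] - V[:-1]                      # forward differences Delta V(k), k = 0..l
# Telescoping: sum_{k=0}^{l} Delta V(k) = V(l+1) - V(0)
assert np.isclose(dV.sum(), V[l + 1] - V[0])

# Hence, if Delta V(k) <= 2 z(k)^T w(k) + gamma w(k)^T w(k) for all k, summing gives
# V(l+1) - V(0) <= 2 sum z^T w + gamma sum w^T w; with V(0) = 0 and V(l+1) >= 0,
#   -gamma sum w^T w <= 2 sum z^T w,   which is exactly (3.86).
print("telescoping identity verified")
```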
If D=Dz=0, then system (2.1) reduces to the following system
{x(k+1)=(A+ΔA(k))x(k−τ(k))+(B+ΔB(k))f(x(k))+(C+ΔC(k))g(x(k−h(k)))+w(k), z(k)=Azx(k−τ(k))+Bzf(x(k))+Czg(x(k−h(k))), x(k)=ϕ(k), k=−ρ,−ρ+1,…,0. | (3.87) |
The delay-dependent passivity criterion for the system in (3.87) can be directly deduced from Theorem 4. We introduce the following notations for later use
^SnT0=[^SnT0],^Nn0=[^Nn0],˜Π=[˜Πi,j]21×21, |
where ˜Πi,j=˜ΠTj,i=ˆΠi,j, i,j=1,2,3,…,21,
˜Π1,21=QT8(A1−I)+Q1, ˜Π2,21=−QT8+Q2, ˜Π11,21=−ATz+QT8A2+Q3, ˜Π14,21=−QT8A1+Q4, ˜Π19,21=−BTz+QT8B+Q5, ˜Π20,21=−CTz+QT8C+Q6, ˜Π21,21=−γI+QT8+Q8, |
and others are equal to zero.
Corollary 4. The system (3.87) is robustly passive if there exist positive definite symmetric matrices Pi,Qj,Rk, i=1,2,…,9, j=1,2,…,4,6, k=1,2,…,8, any appropriate dimensional matrices J,T1,T2,Sl, Jm,Km,Mm, Nm, l=1,2,…,4, m=1,2,3 and any positive real constants δ and γ such that the following LMIs hold
[˜Π^Sn0δ^NnT0∗−δIδET∗∗−δI]<0, | (3.88) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.89) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.90) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.91) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.92) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.93) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.94) |
Proof. The proof is omitted since it is analogous to the derivation of Theorem 4 with Definition 3.
If the leakage delay term disappears and D=Dz=0, then system (2.1) reduces to the following system
{x(k+1)=Ax(k−τ(k))+Bf(x(k))+Cg(x(k−h(k)))+w(k), z(k)=Azx(k−τ(k))+Bzf(x(k))+Czg(x(k−h(k))), x(k)=ϕ(k), k=−ρ,−ρ+1,…,0. | (3.95) |
The delay-dependent passivity criterion for the system in (3.95) can be directly deduced from Theorem 4. We introduce the following notations for later use
˜Π=[˜Πi,j]21×21, |
where ˜Πi,j=˜ΠTj,i=ˆΠi,j, i,j=1,2,3,…,21,
˜Π1,21=QT8(A1−I)+Q1, ˜Π2,21=−QT8+Q2, ˜Π11,21=−ATz+QT8A2+Q3, ˜Π14,21=−QT8A1+Q4, ˜Π19,21=−BTz+QT8B+Q5, ˜Π20,21=−CTz+QT8C+Q6, ˜Π21,21=−γI+QT8+Q8, |
and others are equal to zero.
Corollary 5. The system (3.95) is passive, if there exist positive definite symmetric matrices Pi,Qj,Rk, i=1,2,…,9, j=1,2,…,4,6, k=1,2,…,8, any appropriate dimensional matrices J,T1,T2,Sl, Jm,Km,Mm, Nm, l=1,2,…,4, m=1,2,3 and any positive real constant γ such that the following LMIs hold
˜Π<0, | (3.96) |
[Z1L1L2∗L3L4∗∗L5]≥0, | (3.97) |
[Z2M1M2∗M3M4∗∗M5]≥0, | (3.98) |
[Z3N1N2∗N3N4∗∗N5]≥0, | (3.99) |
[Z4S1S2∗S3S4∗∗S5]≥0, | (3.100) |
[Z5T1T2∗T3T4∗∗T5]≥0, | (3.101) |
[Z6U1U2∗U3U4∗∗U5]≥0. | (3.102) |
Proof. The proof is omitted, since it is analogous to the derivation of Theorem 4 with Definition 3.
Remark 1. New delay-range-dependent asymptotic stability criteria for uncertain discrete-time neural networks with interval discrete, distributed, and leakage time-varying delays are established in Theorems 1–3 and Corollaries 1–3. Moreover, new delay-range-dependent passivity criteria for the same class of networks are given in Theorem 4 and Corollaries 4 and 5. We combine several techniques: the new summation inequalities, the Jensen inequality, the coefficient matrix decomposition technique, utilization of the zero equation, mixed model transformation, and the reciprocally convex combination. Using the new LKFs together with these lemmas leads to results that are less conservative than those in the published literature, as demonstrated by the numerical examples below.
In this section, we provide numerical examples that illustrate the effectiveness and applicability of the proposed criteria.
Example 1. To illustrate the effectiveness of the proposed stability criterion (Theorem 1), consider the discrete-time system (3.3) subject to norm-bounded uncertainties with the following parameters
A1=A2=diag(0.2, 0.05, 0.15), B=[−0.3 0.1 0.2; 0.2 0.2 0; 0 −0.1 −0.4], C=[0.4 0.2 −0.1; 0 0.2 0.3; −0.1 0 0.2], D=[−0.2 0.1 0; 0.2 0.3 0.2; 0 −0.2 0.2]. |
For the activation functions, we take F1=G1=0 (the 3×3 zero matrix) and F2=G2=diag(0.4, 0.3, 0.3). |
In addition, we choose 2≤τ(k)≤8. Then, using the MATLAB LMI Toolbox, we solve LMIs (3.4)–(3.10); the corresponding permissible upper limits of h2 for h1 ranging from 4 to 20 are computed and given in Table 1 as follows:
h1 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 12 | 15 | 20 |
Theorem 1 | 65 | 64 | 64 | 63 | 62 | 62 | 60 | 58 | 56 | 54 |
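Before solving the full LMIs (3.4)–(3.10), a quick nominal sanity check is possible in plain NumPy: with the decomposition A = A1 + A2 from Example 1, the delay-free linear part x(k+1) = Ax(k) must be Schur stable, a necessary ingredient for any delay-dependent result. The sketch below (an illustration, not a substitute for the LMI toolbox computation) solves the discrete Lyapunov equation AᵀPA − P = −I by fixed-point iteration, which converges because ρ(A) < 1.

```python
import numpy as np

# Example 1 decomposition: A = A1 + A2 with A1 = A2 = diag(0.2, 0.05, 0.15)
A = np.diag([0.2, 0.05, 0.15]) + np.diag([0.2, 0.05, 0.15])  # = diag(0.4, 0.1, 0.3)
assert np.max(np.abs(np.linalg.eigvals(A))) < 1              # Schur stable

# Fixed-point iteration P <- A^T P A + I converges to the unique solution of
# A^T P A - P = -I, since the map is a contraction when rho(A) < 1.
P = np.eye(3)
for _ in range(200):
    P = A.T @ P @ A + np.eye(3)

assert np.all(np.linalg.eigvalsh(P) > 0)                     # P positive definite
assert np.allclose(A.T @ P @ A - P, -np.eye(3), atol=1e-12)  # Lyapunov eq. holds
```

The resulting P certifies stability of the delay-free part only; the delay-dependent bounds in Table 1 require the full Lyapunov-Krasovskii machinery.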
Example 2. Consider system (3.29) with Theorem 2 and the following parameters
A1=diag(0.2, 0.04, 0.1), A2=diag(0.2, 0.06, 0.2), B=[−0.3 0.1 0.2; 0.2 0.2 0; 0 −0.1 −0.4], C=[0.4 0.2 −0.1; 0 0.2 0.3; −0.1 0 0.2], D=[−0.2 0.1 0; 0.2 0.3 0.2; 0 −0.2 0.2]. |
For the activation functions, we take F1=G1=0 (the 3×3 zero matrix) and F2=G2=diag(0.4, 0.3, 0.3). |
Then, using the MATLAB LMI Toolbox, we solve LMIs (3.31)–(3.34); the corresponding permissible upper limits of h2 for h1 ranging from 4 to 20 are computed and given in Table 2 as follows:
h1 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 12 | 15 | 20 |
Theorem 2 | 86 | 86 | 87 | 87 | 87 | 87 | 88 | 88 | 90 | 92 |
Example 3. Consider system (3.49) with Corollary 2 and the following parameters
A1=A2=[0.4 0; 0 0.45], B=[0.001 0; 0 0.005], C=[−0.1 0.01; −0.2 −0.1]. |
For the activation functions, we take F1=G1=0 (the 2×2 zero matrix) and F2=G2=diag(0.5, 0.5). |
Table 3 lists the maximum admissible upper bounds h2 for different values of h1 for system (3.49), together with the corresponding results from the literature. Table 3 shows that, for this case, our criterion provides larger upper bounds on the time delay than those in [10,13,14,30,31].
Methods∖h1= | 6 | 8 | 10 | 15 | 20 |
Theorem 1 [13] | 20 | 21 | 21 | 23 | 26 |
Theorem 2 [14] | 19 | 20 | 21 | 24 | 27 |
Corollary 1 [31] | 20 | 20 | 21 | 24 | 27 |
Theorem 1 [10] | 21 | 21 | 22 | 24 | 27 |
Theorem 2 [10] | 20 | 21 | 22 | 24 | 27 |
Corollary 3.1 [30] | 20 | 22 | 24 | 29 | 34 |
Corollary 2 | 23 | 24 | 25 | 30 | 35 |
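As an independent sanity check on the Corollary 2 entry for h1 = 6 (maximum h2 = 23), the sketch below simulates system (3.49) with Example 3's parameters and a time-varying delay in [6, 23]. The concrete activation functions f(x) = g(x) = 0.5 tanh(x) and the decomposition A = A1 + A2 are assumptions made only for this simulation; the activations satisfy the sector bounds F1 = G1 = 0 and F2 = G2 = 0.5I.

```python
import numpy as np

# Example 3 parameters; A = A1 + A2 (assumed decomposition for simulation)
A = 2 * np.array([[0.4, 0.0], [0.0, 0.45]])          # A1 + A2 = diag(0.8, 0.9)
B = np.array([[0.001, 0.0], [0.0, 0.005]])
C = np.array([[-0.1, 0.01], [-0.2, -0.1]])
f = g = lambda x: 0.5 * np.tanh(x)                   # sector [0, 0.5] activations

h1, h2, steps = 6, 23, 1200
x = np.ones((steps + h2 + 1, 2))                     # initial history phi(s) = (1, 1)^T
for k in range(h2, steps + h2):
    h = h1 + (k % (h2 - h1 + 1))                     # time-varying delay in [h1, h2]
    x[k + 1] = A @ x[k] + B @ f(x[k]) + C @ g(x[k - h])

# Asymptotic stability: the state decays from the unit initial history.
assert np.linalg.norm(x[-1]) < 1e-4
print("final state norm:", np.linalg.norm(x[-1]))
```

A single trajectory cannot prove stability, but a diverging trajectory would immediately falsify the certificate, which makes this a useful cheap cross-check of the LMI results.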
Example 4. To illustrate the effectiveness of the proposed robust stability criterion (Corollary 3) for uncertain discrete-time systems subject to norm-bounded uncertainties, consider the following system
x(k+1)=[0.8+α(k) 0; 0 0.9]x(k)+[−0.1 0; −0.1 −0.1]x(k−h(k)), |
where |α(k)|<α. The uncertain system can be expressed in the form of (3.66) with the following parameters
A1=A2=[0.4 0; 0 0.45], Γ=[α 0; 0 0], H1=[1 0; 0 0], H2=H3=E=0 (the 2×2 zero matrix). |
For a given interval [h1,h2], the maximum values of α that guarantee robust asymptotic stability of this system are listed in Table 4. From the table, it is clear that the proposed robust stability criterion tolerates a larger perturbation bound for a given delay range than [3,7,29,40] without losing stability.
[h1,h2] | [2, 7] | [3, 9] | [5, 10] | [6, 12] | [10, 15] |
Gao and Chen [3] | 0.190 | 0.145 | 0.131 | 0.090 | 0.065 |
Huang and Feng [7] | 0.192 | 0.154 | 0.142 | 0.114 | 1.102 |
Ramakrishnan and Ray [29] | 0.195 | 0.165 | 0.154 | 0.131 | 1.112 |
Wang et al. [40] | 0.205 | 0.172 | 0.161 | 0.138 | - |
Corollary 3 | 0.210 | 0.179 | 0.168 | 0.141 | 1.126 |
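A trajectory-level spot check of Table 4's first column: the sketch below simulates Example 4's system with one fixed admissible perturbation, α(k) ≡ 0.1, which sits well inside the certified bound 0.210 for [h1, h2] = [2, 7]. The constant perturbation and the particular delay pattern are simulation assumptions; a single realization illustrates, but does not prove, robust stability over all admissible uncertainties.

```python
import numpy as np

# Example 4 system with a fixed admissible perturbation alpha(k) = 0.1
# (well inside the certified bound 0.210 for [h1, h2] = [2, 7]).
alpha = 0.1
A = np.array([[0.8 + alpha, 0.0], [0.0, 0.9]])
Ad = np.array([[-0.1, 0.0], [-0.1, -0.1]])

h1, h2, steps = 2, 7, 1500
x = np.ones((steps + h2 + 1, 2))            # unit initial history
for k in range(h2, steps + h2):
    h = h1 + (k % (h2 - h1 + 1))            # time-varying delay in [2, 7]
    x[k + 1] = A @ x[k] + Ad @ x[k - h]

assert np.linalg.norm(x[-1]) < 1e-3         # the state decays for this realization
print("final state norm:", np.linalg.norm(x[-1]))
```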
Example 5. To illustrate the effectiveness of the proposed passivity criterion (Theorem 4), consider the discrete-time system (2.1) subject to norm-bounded uncertainties with the following parameters
A1=A2=diag(0.2, 0.05, 0.15), B=[−0.3 0.1 0.2; 0.2 0.2 0; 0 −0.1 −0.4], C=[0.2 0.1 −0.1; 0 0.1 0; −0.1 0 0.2], D=[−0.1 −0.1 0; 0.1 0.1 0.1; 0 −0.1 0.1], |
H1=H2=diag(0.1, 0.1, 0.1), H3=H4=diag(0.05, 0.05, 0.05), |
E=0, Γ=I, Az=diag(0.2, 0.08, 0.1), Bz=diag(0.1, 0.2, 0.1), Cz=diag(0.2, 0.2, −0.1), Dz=diag(−0.1, 0.1, 0.1). |
For the activation functions, we take F1=G1=diag(0.1, 0.1, 0.1) and F2=G2=diag(0.4, 0.3, 0.3). |
In addition, we choose 2≤τ(k)≤8 and δ(i)=1, i=1,2,…,M. Using the MATLAB LMI Toolbox to solve LMIs (3.78)–(3.84), the corresponding permissible upper limits of h2 for h1 ranging from 4 to 20 are computed and given in Table 5 as follows:
h1 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 12 | 15 | 20 |
Theorem 4 | 30 | 30 | 28 | 28 | 27 | 27 | 26 | 25 | 24 | 22 |
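As a rough empirical illustration of the passivity notion in Definition 3 (not of the LMI certificate itself), the sketch below simulates a simplified nominal version of Example 5's system: the uncertainties and the distributed-delay term are set to zero, so the dynamics take the form (3.95); the decomposition A = A1 + A2, the activations f(x) = g(x) = 0.3 tanh(x) (consistent with the sector bounds), and the deliberately generous level γ = 10 are all assumptions of this sketch. Under the zero initial condition, the accumulated supply 2Σzᵀw + γΣwᵀw should be nonnegative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified nominal instance of Example 5 in the form (3.95); A = A1 + A2 is an
# assumed decomposition and the activations are an assumed admissible choice.
A = np.diag([0.4, 0.1, 0.3])
B = np.array([[-0.3, 0.1, 0.2], [0.2, 0.2, 0.0], [0.0, -0.1, -0.4]])
C = np.array([[0.2, 0.1, -0.1], [0.0, 0.1, 0.0], [-0.1, 0.0, 0.2]])
Az, Bz, Cz = np.diag([0.2, 0.08, 0.1]), np.diag([0.1, 0.2, 0.1]), np.diag([0.2, 0.2, -0.1])
f = g = lambda x: 0.3 * np.tanh(x)

tau_lo, tau_hi, h_lo, h_hi, L = 2, 8, 4, 20, 300
gamma = 10.0                                  # generous assumed passivity level
pad = max(tau_hi, h_hi)
x = np.zeros((L + pad + 1, 3))                # zero initial condition
supply = 0.0
for k in range(pad, pad + L):
    tau = tau_lo + (k % (tau_hi - tau_lo + 1))
    h = h_lo + (k % (h_hi - h_lo + 1))
    w = rng.uniform(-1.0, 1.0, 3)             # bounded disturbance input
    z = Az @ x[k - tau] + Bz @ f(x[k]) + Cz @ g(x[k - h])
    supply += 2.0 * z @ w + gamma * w @ w     # supply rate of Definition 3
    x[k + 1] = A @ x[k - tau] + B @ f(x[k]) + C @ g(x[k - h]) + w

assert supply >= 0.0                          # the inequality of Definition 3 holds
print("accumulated supply:", supply)
```

The optimal γ would come from minimizing γ subject to the LMIs (3.78)–(3.84); the value above is only a comfortable upper estimate for this nominal instance.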
Remark 2. Stability is a fundamental property in the theory of discrete-time delayed systems and underlies the passivity analysis of such systems. In recent years, stability and passivity properties have been studied for various discrete-time delayed systems [3,7,10,13,14,29,30,31,40]. In this work, we use the refined summation inequality together with mixed techniques to improve the stability and passivity criteria; applying these methods yields less conservative results than those of [3,7,10,13,14,29,30,31,40].
This paper has studied the asymptotic stability and passivity of discrete-time neural networks with mixed interval time-varying delays, as well as the robust asymptotic stability and passivity of their uncertain counterparts. The analysis combined an enhanced Lyapunov-Krasovskii functional, a mixed model transformation, the coefficient matrix decomposition approach, and the use of zero equations. A new set of delay-range-dependent robust asymptotic stability criteria was derived in terms of LMIs. Numerical examples demonstrate that our criteria are less conservative than those in the existing literature, and an additional example illustrates the applicability of the proposed results.
This research has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F650018].
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
[1] S. Arik, Stability analysis of delayed neural networks, IEEE Trans. Circuits Syst., 47 (2000), 1089–1092. https://doi.org/10.1109/81.855465
[2] J. Chen, J. H. Park, S. Xu, Improved stability criteria for discrete-time delayed neural networks via novel Lyapunov-Krasovskii functionals, IEEE Trans. Cybernetics, 52 (2022), 11885–11862. https://doi.org/10.1109/TCYB.2021.3076196
[3] H. Gao, T. Chen, New results on stability of discrete-time systems with time-varying state delay, IEEE Trans. Automat. Control, 52 (2007), 328–334. https://doi.org/10.1109/TAC.2006.890320
[4] H. Gao, T. Chen, T. Chai, Passivity and passification for networked control systems, SIAM J. Control Optim., 46 (2007), 1299–1322. https://doi.org/10.1137/060655110
[5] K. Gopalsamy, Stability and oscillations in delay differential equations of population dynamics, Dordrecht: Kluwer Academic Publishers Group, 1992.
[6] K. Gopalsamy, Leakage delays in BAM, J. Math. Anal. Appl., 325 (2007), 1117–1132. https://doi.org/10.1016/j.jmaa.2006.02.039
[7] H. Huang, G. Feng, Improved approach to delay-dependent stability analysis of discrete-time systems with time-varying delay, IET Control Theory Appl., 4 (2010), 2152–2159. https://doi.org/10.1049/iet-cta.2009.0225
[8] L. Jarina Banu, P. Balasubramaniam, K. Ratnavelu, Robust stability analysis for discrete-time uncertain neural networks with leakage time-varying delay, Neurocomputing, 151 (2015), 808–816. https://doi.org/10.1016/j.neucom.2014.10.018
[9] L. Jin, Y. He, L. Jiang, M. Wu, Extended dissipativity analysis for discrete-time delayed neural networks based on an extended reciprocally convex matrix inequality, Inf. Sci., 462 (2018), 357–366. https://doi.org/10.1016/j.ins.2018.06.037
[10] L. Jin, Y. He, M. Wu, Improved delay-dependent stability analysis of discrete-time neural networks with time-varying delay, J. Frankl. Inst., 354 (2017), 1922–1936. https://doi.org/10.1016/j.jfranklin.2016.12.027
[11] O. M. Kwon, S. M. Lee, J. H. Park, E. J. Cha, New approaches on stability criteria for neural networks with interval time-varying delays, Appl. Math. Comput., 218 (2012), 9953–9964. https://doi.org/10.1016/j.amc.2012.03.082
[12] O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, E. J. Cha, Improved delay-dependent stability criteria for discrete-time systems with time-varying delays, Circuits Syst. Signal Process., 32 (2013), 1949–1962. https://doi.org/10.1007/s00034-012-9543-6
[13] O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, E. J. Cha, New criteria on delay-dependent stability for discrete-time neural networks with time-varying delays, Neurocomputing, 121 (2013), 185–194. https://doi.org/10.1016/j.neucom.2013.04.026
[14] O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, E. J. Cha, Stability analysis for discrete-time neural networks with time-varying and stochastic parameter uncertainties, Can. J. Phys., 93 (2015), 398–408. https://doi.org/10.1139/cjp-2014-0264
[15] C. Li, T. Huang, On the stability of nonlinear systems with leakage delay, J. Franklin Inst., 346 (2009), 366–377. https://doi.org/10.1016/j.jfranklin.2008.12.001
[16] C. Li, H. Zhang, X. Liao, Passivity and passification of fuzzy systems with time delays, Comput. Math. Appl., 52 (2006), 1067–1078. https://doi.org/10.1016/j.camwa.2006.03.029
[17] T. Li, L. Guo, C. Lin, A new criterion of delay-dependent stability for uncertain time-delay systems, IET Control Theory Appl., 1 (2007), 611–616. https://doi.org/10.1049/iet-cta:20060235
[18] X. Li, J. Cao, Delay-dependent stability of neural networks of neutral type with time delay in the leakage term, Nonlinearity, 23 (2010), 1709–1726. https://doi.org/10.1088/0951-7715/23/7/010
[19] X. D. Li, J. Shen, LMI approach for stationary oscillation of interval neural networks with discrete and distributed time-varying delays under impulsive perturbations, IEEE Trans. Neural Networ., 21 (2010), 1555–1563. https://doi.org/10.1109/TNN.2010.2061865
[20] X. Li, X. Fu, P. Balasubramaniam, R. Rakkiyappan, Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations, Nonlinear Anal.: Real World Appl., 11 (2010), 4092–4108. https://doi.org/10.1016/j.nonrwa.2010.03.014
[21] X. G. Liu, F. X. Wang, M. L. Tang, Auxiliary function-based summation inequalities and their applications to discrete-time systems, Automatica, 78 (2017), 211–215. https://doi.org/10.1016/j.automatica.2016.12.036
[22] Y. Liu, Z. Wang, X. Liu, Asymptotic stability for neural networks with mixed time-delays: the discrete-time case, Neural Networks, 22 (2009), 67–74. https://doi.org/10.1016/j.neunet.2008.10.001
[23] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Networks, 19 (2006), 667–675. https://doi.org/10.1016/j.neunet.2005.03.015
[24] S. Mohamad, K. Gopalsamy, Exponential stability of continuous-time and discrete-time cellular neural networks with delays, Appl. Math. Comput., 135 (2003), 17–38. https://doi.org/10.1016/S0096-3003(01)00299-5
[25] P. T. Nam, H. Trinh, P. N. Pathirana, Discrete inequalities based on multiple auxiliary functions and their applications to stability analysis of time-delay systems, J. Frankl. Inst., 352 (2015), 5810–5831. https://doi.org/10.1016/j.jfranklin.2015.09.018
[26] D. Nishanthi, L. Jarina Banu, P. Balasubramaniam, Robust guaranteed cost state estimation for discrete-time systems with random delays and random uncertainties, Int. J. Adapt. Control Signal Process., 31 (2017), 1361–1372. https://doi.org/10.1002/acs.2770
[27] P. G. Park, J. W. Ko, C. Jeong, Reciprocally convex approach to stability of systems with time-varying delays, Automatica, 47 (2011), 235–238. https://doi.org/10.1016/j.automatica.2010.10.014
[28] V. N. Phat, Robust stability and stabilizability of uncertain linear hybrid systems with state delays, IEEE Trans. Circ. Syst., 52 (2005), 94–98. https://doi.org/10.1109/TCSII.2004.840115
[29] K. Ramakrishnan, G. Ray, Robust stability criteria for a class of uncertain discrete-time systems with time-varying delay, Appl. Math. Model., 37 (2013), 1468–1479. https://doi.org/10.1016/j.apm.2012.03.045
[30] Y. Shan, S. Zhong, J. Cui, L. Hou, Y. Li, Improved criteria of delay-dependent stability for discrete-time neural networks with leakage delay, Neurocomputing, 266 (2017), 409–419. https://doi.org/10.1016/j.neucom.2017.05.053
[31] Y. Shu, X. Liu, Y. Liu, Stability and passivity analysis for uncertain discrete-time neural networks with time-varying delay, Neurocomputing, 173 (2016), 1706–1714. https://doi.org/10.1016/j.neucom.2015.09.043
[32] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing, 72 (2009), 1782–1788. https://doi.org/10.1016/j.neucom.2008.05.006
[33] S. K. Tadepalli, V. K. R. Kandanvli, Improved stability results for uncertain discrete-time state-delayed systems in the presence of nonlinearities, Trans. Inst. Meas. Control, 38 (2016), 33–43. https://doi.org/10.1177/0142331214562020
[34] Y. Tang, J. Fang, M. Xia, X. Gu, Synchronization of Takagi-Sugeno fuzzy stochastic discrete-time complex networks with mixed time-varying delays, Appl. Math. Model., 34 (2010), 843–855. https://doi.org/10.1016/j.apm.2009.07.015
[35] J. Tian, S. Zhong, Improved delay-dependent stability criterion for neural networks with time-varying delay, Appl. Math. Comput., 217 (2011), 10278–10288. https://doi.org/10.1016/j.amc.2011.05.029
[36] S. Udpin, P. Niamsup, Robust stability of discrete-time LPD neural networks with time-varying delay, Commun. Nonlinear Sci. Numer. Simul., 14 (2009), 3914–3924. https://doi.org/10.1016/j.cnsns.2008.08.018
[37] X. Wan, M. Wu, Y. He, J. She, Stability analysis for discrete time-delay systems based on new finite-sum inequalities, Inf. Sci., 369 (2016), 119–127. https://doi.org/10.1016/j.ins.2016.06.024
[38] T. Wang, M. X. Xue, C. Zhang, S. M. Fei, Improved stability criteria on discrete-time systems with time-varying and distributed delays, Int. J. Autom. Comput., 10 (2013), 260–266. https://doi.org/10.1007/s11633-013-0719-8
[39] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A, 345 (2005), 299–308. https://doi.org/10.1016/j.physleta.2005.07.025
[40] T. Wang, C. Zhang, S. Fei, T. Li, Further stability criteria on discrete-time delayed neural networks with distributed delay, Neurocomputing, 111 (2013), 195–203. https://doi.org/10.1016/j.neucom.2012.12.017
[41] L. Wu, W. Zheng, Passivity-based sliding mode control of uncertain singular time-delay systems, Automatica, 45 (2009), 2120–2127. https://doi.org/10.1016/j.automatica.2009.05.014
[42] M. Wu, C. Peng, J. Zhang, M. Fei, Y. Tian, Further results on delay-dependent stability criteria of discrete systems with an interval time-varying delay, J. Franklin Inst., 354 (2017), 4955–4965. https://doi.org/10.1016/j.jfranklin.2017.05.005
[43] L. Xie, M. Fu, H. Li, Passivity analysis and passification for uncertain signal processing systems, IEEE Trans. Signal Process., 46 (1998), 2394–2403. https://doi.org/10.1109/78.709527
[44] C. K. Zhang, Y. He, L. Jiang, M. Wu, An improved summation inequality to discrete-time systems with time-varying delay, Automatica, 74 (2016), 10–15. https://doi.org/10.1016/j.automatica.2016.07.040
[45] X. M. Zhang, Q. L. Han, X. Ge, B. L. Zhang, Delay-variation-dependent criteria on extended dissipativity for discrete-time neural networks with time-varying delay, IEEE Trans. Neur. Net. Lear., 2021, 1–10. https://doi.org/10.1109/TNNLS.2021.3105591