This paper concerns the input-to-state stability problem of delayed reaction-diffusion neural networks with multiple impulses. After reformulating the neural-network model in terms of an abstract impulsive functional differential equation, criteria for input-to-state stability are established by a direct estimate of the mild solution and an integral inequality with infinite distributed delay. It is shown that the input-to-state stability of the continuous dynamics can be retained under certain multiple impulsive disturbances, and that unstable continuous dynamics can be stabilised by multiple impulsive control, provided the intervals between the multiple impulses are bounded. Numerical simulations of two examples are given to show the effectiveness of the theoretical results.
Citation: Tengda Wei, Xiang Xie, Xiaodi Li. Input-to-state stability of delayed reaction-diffusion neural networks with multiple impulses[J]. AIMS Mathematics, 2021, 6(6): 5786-5800. doi: 10.3934/math.2021342
Over the past few decades, neural networks have been widely applied in fields such as smart grids, secure communication, and machine learning, among many others [1,2,3,4,5,6]. Such applications heavily depend on dynamical behaviors such as stability, synchronization, periodicity, and passivity, to name just a few. Among these properties, input-to-state stability (ISS), which was originally developed by E. D. Sontag in the late 1980s [7] and measures the influence of external inputs on stability, is of particular significance in the dynamical analysis of nonlinear systems, including neural networks. Extensions of ISS have also been proposed for various kinds of nonlinear systems and have inspired plenty of valuable works [8,9,10,11,12,13,14,15].
In the ISS analysis of neural networks, time delays are usually taken into account, since the finite speed of signal transmission and amplifier switching inevitably causes hysteresis. Two kinds of time delays are usually considered in neural networks: time-varying delays and infinite distributed delays [16,17,18,19,20,21]. For instance, the exponential ISS of recurrent neural networks with multiple time-varying delays was studied by the Lyapunov method in [18], and the results were further extended to the stochastic case with both time-varying delays and infinite distributed delays [19]. In [20], the pth moment exponential ISS of stochastic recurrent neural networks with time-varying delay was investigated by a vector Lyapunov function to reduce the conservatism caused by the scalar Lyapunov functions used in previous literature.
Recently, reaction-diffusion terms have been introduced into neural-network models because electrons sometimes follow diffusive trajectories in a nonuniform electromagnetic field. Different from delayed neural networks (DNNs) without reaction-diffusion, delayed reaction-diffusion neural networks (DRDNNs) are described by partial differential equations because their dynamics depends on both the spatial and the time derivatives. Therefore, the dynamical analysis of DRDNNs has attracted the interest of many researchers, and the extension of ISS from DNNs to DRDNNs has also been carried out [22,23,24,25,26,27,28,29,30]. In addition, ISS is used not only for DRDNNs but is in fact a key concept in the robust control of infinite-dimensional systems, with the expectation that ISS will enable advances in the control theory of infinite-dimensional systems similar to those it brought to finite-dimensional systems; see [31,32] and the references therein.
On the other hand, impulsive effects may occur in the hardware implementation of neural networks, since the nodes may be shocked by defective connections, sudden attacks, and abrupt changes [24]. Therefore, impulsive delayed neural networks (IDNNs) have been extensively studied [33,34,35,36,37], where the impulses are classified into two kinds: stabilising impulses, which force the trajectory of the neural network into a desirable pattern, and destabilising impulses, which bring fluctuations to the neural network. However, multiple impulses containing both stabilising and destabilising impulses are better suited to model the instantaneous shocks of neural networks. Even though some ISS properties of impulsive nonlinear systems with multiple impulses have been unveiled in recent literature [38,39], the ISS of DNNs with multiple impulses has rarely been investigated, not to mention the ISS of DRDNNs with multiple impulses, because multiple impulses are difficult to handle when infinite distributed delays are included in the neural networks.
Motivated by the above discussion, the aim of this paper is to establish ISS criteria for DRDNNs with multiple impulses. The contributions lie in the following aspects: (1) Multiple impulses, infinite distributed delays, and reaction-diffusion are considered simultaneously in the neural-network model; (2) The ISS conditions of the DRDNNs with multiple impulses are obtained by a direct estimate of the mild solution and an integral inequality; (3) It is shown that the ISS property of the continuous dynamics can be retained under certain multiple impulsive disturbances and that unstable continuous dynamics can be stabilised by multiple impulsive control, provided the intervals between the multiple impulses are bounded.
The remainder of this paper is organized as follows. Section 2 introduces the neural-network model and preliminaries. Section 3 gives the sufficient conditions for ISS of the DRDNNs with multiple impulses. Section 4 presents the numerical simulation of two examples. Finally, the conclusions are drawn in Section 5.
In this paper, unless otherwise specified, the following notations are used. \bar{n} = \{1, 2, \cdots, n\} and \mathbb{N} = \{1, 2, 3, \cdots\} . For a, b\in\mathbb{R} , a\land b denotes the minimum of a and b . \mathcal{L}\triangleq (L^2(\mathcal{O}))^n , where L^2(\mathcal{O}) is a Hilbert space with inner product \langle z_1, z_2\rangle = \int_{\mathcal{O}} z_1(x)z_2(x)dx and norm \|z\|^2 = \langle z, z\rangle , and \mathcal{O} = \{x\,|\,x = (x_1, \cdots, x_w)^T, |x_j|\le\rho_j, \rho_j\in\mathbb{R}_+, j\in\bar{w}\} . Here, we also use the same symbol \|\cdot\| to denote the usual norm of linear bounded operators from \mathcal{L} to \mathcal{L} . \mathcal{H}\triangleq (H)^n , where H = \{z\in L^2(\mathcal{O}): \partial z/\partial x_i, \partial^2 z/(\partial x_i\partial x_j)\in L^2(\mathcal{O}),\ z(t, x)|_{x\in\partial\mathcal{O}} = 0,\ i, j = 1, 2, \cdots, w\} . Let \mathcal{F}_0 = \{t_1, t_2, t_3, \cdots\} be the set of sequences of impulse times satisfying 0 = t_0 < t_1 < t_2 < \cdots < t_k < \cdots , so that accumulation points are excluded. For \acute{\eta} > 0 and \grave{\eta} > 0 , \mathcal{F}^+(\grave{\eta}) , \mathcal{F}_-(\acute{\eta}) , and \mathcal{F}(\acute{\eta}, \grave{\eta}) denote the sets of admissible sequences of impulse times in \mathcal{F}_0 satisfying 0 < t_k-t_{k-1}\le \grave{\eta} , \acute{\eta} \le t_k-t_{k-1} < \infty , and \acute{\eta} \le t_k-t_{k-1}\le \grave{\eta} for any k\in\mathbb{N} , respectively. PC(R, J) represents the space of functions f: R\to J which are continuous on (t_{k-1}, t_k) for k\in\mathbb{N} and satisfy f(t^+) = f(t) for all t\in R , where R = \mathbb{R} or R = \mathbb{R}_+ and J is a Euclidean space or a Hilbert space. \mathcal{U}\triangleq PC(\mathbb{R}_+, \mathcal{H}\cap \mathcal{L}) . \mathcal{PC} represents the space of functions f: (-\infty, 0]\to\mathcal{L}\cap\mathcal{H} which have at most a finite number of jump discontinuities on (-\infty, 0] and satisfy f(t^+) = f(t) for all t\in (-\infty, 0] . \mathcal{PC}^b = \{f\,|\,f\in \mathcal{PC} \text{ and } f(t) \text{ is bounded on } (-\infty, 0]\} with norm \| f \|_{\mathcal{PC}^b} = \sup_{-\infty < t \le 0}\| f(t) \| . A function z(t, x) is said to be piecewise continuous if z(\cdot, x) is piecewise continuous for all x\in\mathcal{O} . \mathcal{K} represents the class of continuous strictly increasing functions \kappa:\mathbb{R}_+\to\mathbb{R}_+ with \kappa(0) = 0 . \mathcal{K}_\infty is the subset of \mathcal{K} functions that are unbounded. A function \beta is said to belong to the class \mathcal{KL} if \beta(\cdot, t) is of class \mathcal{K} for each fixed t > 0 and \beta(s, t) decreases to 0 as t\to+\infty for each fixed s\ge0 .
Consider the following DRDNNs with multiple impulses
\begin{equation} \left\{ \begin{array}{l} \frac{\partial \hat{z}_i(t,x)}{\partial t} = d_i \sum\limits_{j = 1}^w \frac{\partial^2 \hat{z}_i(t,x)}{\partial x_j^2}-a_i \hat{z}_i(t,x)+\sum\limits_{j = 1}^n b_{ij}\hat{f}_j(\hat{z}_j(t,x))+\sum\limits_{j = 1}^n p_{ij}\hat{f}_j(\hat{z}_j(t-\tau,x)) \\ \qquad\qquad +\sum\limits_{j = 1}^n q_{ij}\int_{0}^{+\infty} k(r)\hat{f}_j(\hat{z}_j(t-r,x))dr+\hat{u}_i(t,x),\ t\ge 0,\ t\ne t_k,\\ \hat{z}_i(t_k,x) = (1+c_k)\hat{z}_i(t_k^-,x) +\hat{u}_i(t_k^-,x),\ k\in\mathbb{N}, \end{array} \right. \end{equation} | (2.1) |
where x\in\mathcal{O} , i\in\bar{n} , \hat{z}_i(t, x) is the state variable of the i th neuron at time t and space x , d_{i} represents the positive transmission diffusion coefficient of the i th neuron, \sum_{j = 1}^w\frac{\partial^2 \hat{z}_i(t, x)}{\partial x_j^2} represents the reaction-diffusion term, a_i > 0 stands for the recovery rate, b_{ij} , p_{ij} , and q_{ij} are the connection weight strengths of the j th neuron on the i th neuron, \hat{f}_j stands for the activation function, \hat{u}_i is the external input, i, j\in\bar{n} . The delay kernel k:[0, +\infty)\to\mathbb{R}_+ is a nonnegative continuous function for which there exists a positive constant \lambda^* such that k(s)\le e^{-\lambda^*s} for s\ge0 . Then, the neural-network model (2.1) can be rewritten in the following vector form
\begin{equation} \left\{ \begin{array}{l} \frac{\partial \hat{z}(t,x)}{\partial t} = D\Delta \hat{z}(t,x)-A \hat{z}(t,x)+B\hat{f}(\hat{z}(t,x))+P\hat{f}(\hat{z}(t-\tau,x)) \\ \qquad\qquad +Q\int_{0}^{+\infty} k(r)\hat{f}(\hat{z}(t-r,x))dr+\hat{u}(t,x),\ t\in[t_{k-1},t_k),\\ \hat{z} (t_k,x) = (1+c_k)\hat{z}(t_k^-,x)+\hat{u}(t_k^-,x),\ k\in\mathbb{N}, \end{array} \right. \end{equation} | (2.2) |
where \hat{z} = (\hat{z}_1, \hat{z}_2, \cdots, \hat{z}_n)^T , \Delta = \sum_{j = 1}^w\frac{\partial^2}{\partial x_j^2} , D = \text{diag}(d_1, d_2, \cdots, d_n) , A = \text{diag}(a_1, a_2, \cdots, a_n) , B = (b_{ij})_{n\times n} , P = (p_{ij})_{n\times n} , Q = (q_{ij})_{n\times n} , \hat{f}(z) = (\hat{f}_1(z_1), \hat{f}_2(z_2), \cdots, \; \hat{f}_n(z_n))^T , and \hat{u} = (\hat{u}_1, \hat{u}_2, \cdots, \hat{u}_n)^T . The Dirichlet boundary condition and initial condition, associated with (2.1) or (2.2), are given by
\begin{align} &\hat{z}(t,x)|_{x\in\partial\mathcal{O}} = 0,\ t\in\mathbb{R}, \end{align} | (2.3) |
\begin{align} \hat{z}(t,x)& = \hat{\phi}(t,x)\in\mathcal{PC}^b,\ t\le 0,x\in\mathcal{O}. \end{align} | (2.4) |
As standard hypotheses, we assume that
(H1) there exist positive constants l_i such that, for \forall\hat{z}_1, \hat{z}_2\in\mathbb{R} , i\in\bar{n} ,
\begin{equation*} |\hat{f}_i(\hat{z}_1)-\hat{f}_i(\hat{z}_2)|\le l_i |\hat{z}_1-\hat{z}_2|; \end{equation*} |
(H2) there exist constants N\in\mathbb{N} and \sigma_k > 0 , k\in\mathbb{N} such that \sigma_{k+N} = \sigma_k and |1+c_k|\le\sigma_k .
In this paper, we always assume that (H1) and (H2) are satisfied. The sets of stabilising strengths and destabilising strengths are denoted by \{\grave{\sigma}_i\}_{i = 1}^p and \{\acute{\sigma}_i\}_{i = 1}^{N-p} , respectively. Define a linear operator \mathscr{D} from \mathcal{H} to \mathcal{L} by \mathscr{D}\hat{z} = D\Delta\hat{z}-A\hat{z} ; then \mathscr{D} is the infinitesimal generator of a strongly continuous ( C_0 -) semigroup S(t) [40]. Furthermore, the neural networks (2.2)–(2.4) can be reformulated as the following abstract impulsive functional differential equation
\begin{equation} \left\{ \begin{array}{l} \frac{dz(t)}{dt} = \mathscr{D}z(t)+Bf(z(t))+Pf(z(t-\tau))\\ \qquad +Q\int_{0}^{+\infty} k(r)f(z(t-r))dr+u(t),\ t\in[t_{k-1},t_k),\\ z(t_k) = (1+c_k) z(t_k^-) + u(t_k^-),\ k\in\mathbb{N},\\ z_0 = \phi\in\mathcal{PC}^b, \end{array} \right. \end{equation} | (2.5) |
where z(t) = \hat{z}(t, x)\in\mathcal{L} , f:\mathcal{L}\to\mathcal{L} , u(t) = \hat{u}(t, x)\in\mathcal{L} , and z_0(\theta) = \phi(\theta) = \hat{\phi}(\theta, x)\in\mathcal{PC}^b , \theta\in(-\infty, 0] .
Definition 1. An \mathcal{L} -valued functional z(t) = z(t)(x, \phi, u) is said to be a mild solution of (2.5), if z(t) satisfies the following equation
\begin{equation} \begin{split} z(t) = \ & S(t)\phi(0)+\int_{0}^t S(t-s)Bf(z(s))ds+\int_{0}^t S(t-s)Pf(z(s-\tau))ds\\ & +\int_{0}^t S(t-s)Q\int_{0}^{+\infty} k(r)f(z(s-r))drds+\int_{0}^t S(t-s)u(s)ds+\sum\limits_{t_k\le t} S(t-t_k)(c_kz(t_k^-)+u(t_k^-)). \end{split} \end{equation} | (2.6) |
Remark 1. From Lemma 2.2 and Theorem 5.3 of [41], we obtain the local existence and uniqueness of the mild solution under (H1) and (H2), and the mild solution is continuous between impulses. If system (2.5) is input-to-state stable, the mild solution cannot explode in finite time, which implies global existence and uniqueness.
Definition 2 ([9]). For a given sequence \{t_k\}_{k\in\mathbb{N}} of impulse times, the DRDNNs with multiple impulses (2.5) are called input-to-state stable, if there exist functions \beta\in\mathcal{KL} and \gamma\in\mathcal{K}_\infty such that \forall \phi\in\mathcal{PC}^b , \forall u\in\mathcal{U} it holds that
\begin{equation*} \|z(t)\| \le \beta(\|\phi\|_{\mathcal{PC}^b},t)+\sup\limits_{s\le t} \gamma(\|u(s)\|). \end{equation*} |
The DRDNNs with multiple impulses (2.5) are called uniformly input-to-state stable (UISS) over a given set \mathcal{F} of admissible sequences of impulse times if it is input-to-state stable for every sequence in \mathcal{F} with \beta and \gamma independent of the choice of the sequence from the class \mathcal{F} .
Lemma 1 ([23]). Let \mathcal{O} be a cube |x_j|\le \rho_j ( j\in\bar{w} ) and let h(x) be a real-valued function belonging to C^1(\mathcal{O}) , which vanishes on the boundary \partial\mathcal{O} , that is, h(x)|_{\partial\mathcal{O}} = 0 . Then
\begin{equation} \int_{\mathcal{O}} h^2(x)dx \le \rho_j^2 \int_{\mathcal{O}}\bigg(\frac{\partial h(x)}{\partial x_j}\bigg)^2 dx,\ j\in\bar{w}. \end{equation} | (2.7) |
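As a quick numerical sanity check of (2.7) (our own illustration, not part of the original paper), take \mathcal{O} = [-1, 1] ( \rho_1 = 1 ) and h(x) = \cos(\pi x/2) , which vanishes at x = \pm 1 ; the grid size below is an arbitrary choice.

```python
# Numerical sanity check of Lemma 1 on O = [-1, 1] with rho_1 = 1 and
# h(x) = cos(pi x / 2); the grid resolution is a hypothetical choice.
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
h = np.cos(np.pi * x / 2)
dh = -(np.pi / 2) * np.sin(np.pi * x / 2)   # exact derivative of h
lhs = np.sum(h**2) * dx                     # ≈ ∫ h^2 dx = 1
rhs = 1.0**2 * np.sum(dh**2) * dx           # ≈ rho^2 ∫ (h')^2 dx = pi^2/4 ≈ 2.47
print(lhs, rhs, lhs <= rhs)                 # inequality (2.7) holds
```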
Lemma 2. Assume that (H1) holds. Then, the mild solution of (2.5) can be represented by
\begin{equation} \begin{split} z(t) = \ & T(t,0)\phi(0)+\int_{0}^t T(t,s)Bf(z(s))ds+\int_{0}^t T(t,s)Pf(z(s-\tau))ds\\ & +\int_{0}^t T(t,s)Q\int_{0}^{+\infty} k(r)f(z(s-r))drds+\int_{0}^t T(t,s)u(s)ds+\sum\limits_{t_k\le t} T(t,t_k)u(t_k^-),\ t\ge 0, \end{split} \end{equation} | (2.8) |
where
\begin{equation*} T(t,s) = \left\{ \begin{array}{ll} S(t-s), & t,s\in[t_{k-1},t_k),\\ (1+c_k)S(t-s), & t_{k-1}\le s < t_k\le t < t_{k+1},\\ \Pi_{j = i}^k (1+c_j)S(t-s), & t_{i-1}\le s < t_i < t_k\le t < t_{k+1}. \end{array} \right. \end{equation*} |
Proof. The proof is analogous to that of Lemma 2.2 in [43] and is therefore omitted.
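To illustrate how the impulse strengths enter the evolution family, the following minimal Python sketch (our own illustration; the impulse times, jump sizes, and \vartheta are hypothetical) evaluates the scalar bound \|T(t, s)\|\le \prod_{s < t_k\le t}|1+c_k|e^{-\vartheta(t-s)} that is used later in the proof of Theorem 1 (cf. (3.15)).

```python
# Minimal sketch (illustration only): scalar norm bound for the impulsive
# evolution family, ||T(t, s)|| <= prod_{s < t_k <= t} |1 + c_k| * exp(-theta*(t - s)).
import math

def T_norm_bound(t, s, impulse_times, jumps, theta):
    """Upper bound on ||T(t, s)||; only impulses in (s, t] contribute."""
    prod = 1.0
    for tk, ck in zip(impulse_times, jumps):
        if s < tk <= t:
            prod *= abs(1.0 + ck)
    return prod * math.exp(-theta * (t - s))

# e.g. alternating destabilising (|1+c_k| = 4/3) and stabilising (|1+c_k| = 1/4) impulses
times = [0.5 * k for k in range(1, 21)]
jumps = [1.0 / 3.0 if k % 2 == 1 else -0.75 for k in range(1, 21)]
print(T_norm_bound(5.0, 0.0, times, jumps, theta=0.5))
```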
Lemma 3. Consider the following abstract Cauchy problem
\begin{equation} \left\{ \begin{array}{l} \frac{dz(t)}{dt} = \mathscr{D}z(t),\ t\ge 0,\\ z(0) = \psi, \end{array} \right. \end{equation} | (2.9) |
where \psi\in\mathcal{H} . Then, the strongly continuous semigroup S(t) generated by \mathscr{D} is contractive and satisfies \|S(t)\|^2\le e^{-2\vartheta t} for t\ge0 , where \vartheta = \min_i\{d_i\}\sum_{j = 1}^w(1/\rho_j^2)+\min_i\{a_i\} .
Proof. Recall that the solution of (2.9) is z(t) = S(t)\psi . Combining the Gauss divergence theorem, the homogeneous Dirichlet boundary condition, and Lemma 1, we obtain
\begin{equation} \begin{split} \langle z,\mathscr{D}z\rangle & = \sum\limits_{i = 1}^n d_{i}\sum\limits_{j = 1}^w \int_{\mathcal{O}}z_i(t,x)\frac{\partial^2 z_i(t,x)}{\partial x_j^2}dx-\sum\limits_{i = 1}^n a_i\int_{\mathcal{O}}(z_i(t,x))^2dx\\ & = -\sum\limits_{i = 1}^n d_{i}\sum\limits_{j = 1}^w \int_{\mathcal{O}}\bigg(\frac{\partial z_i(t,x)}{\partial x_j}\bigg)^2dx-\sum\limits_{i = 1}^n a_i\int_{\mathcal{O}}(z_i(t,x))^2dx\\ & \le -\sum\limits_{i = 1}^n\sum\limits_{j = 1}^w \frac{d_{i}}{\rho_j^2}\int_{\mathcal{O}}(z_i(t,x))^2dx-\sum\limits_{i = 1}^n a_i\int_{\mathcal{O}}(z_i(t,x))^2dx \le -\vartheta\|z\|^2, \end{split} \end{equation} | (2.10) |
which implies that
\begin{equation} \frac{d\|z(t)\|^2}{dt} = 2\langle z(t),\mathscr{D}z(t)\rangle\le-2\vartheta\|z(t)\|^2. \end{equation} | (2.11) |
Therefore, \|z(t)\|^2\le e^{-2\vartheta t}\|\psi\|^2 for all \psi\in\mathcal{H} . By the density of \mathcal{H} in \mathcal{L} [42, Theorem 1.2], the result holds for all \psi\in\mathcal{L} , which completes the proof.
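For illustration, the decay rate \vartheta in Lemma 3 is straightforward to evaluate; a minimal Python sketch follows (the helper name is ours), using the parameters later adopted in Example 1, for which \vartheta = 0.2\cdot 1 + 0.3 = 0.5 .

```python
# Minimal sketch (illustration only): decay rate of the semigroup bound
# ||S(t)||^2 <= exp(-2*theta*t) from Lemma 3,
# theta = min_i d_i * sum_j 1/rho_j^2 + min_i a_i.
def semigroup_decay_rate(d, a, rho):
    return min(d) * sum(1.0 / r**2 for r in rho) + min(a)

# two neurons on the one-dimensional domain O = {x : |x| <= 1} (Example 1 data)
print(semigroup_decay_rate(d=[0.2, 0.3], a=[0.3, 0.4], rho=[1.0]))  # 0.2*1 + 0.3 = 0.5
```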
Lemma 4 ([43]). If \acute{\eta}\le t_k-t_{k-1} < \infty for \forall k\in\mathbb{N} , it holds that \sum_{t_k\le t} e^{-c(t-t_k)} < \frac{1}{1-e^{-c\acute{\eta}}} , where c > 0 and t\ge t_1 .
In this section, the ISS of the DRDNNs with multiple impulses will be investigated by the direct estimate of the mild solution. First, let us consider the following integral inequality with infinite distributed delay:
\begin{equation} \left\{ \begin{array}{l} v(t)\le \rho_1 e^{-c t}+\rho_2\int_0^t e^{-c(t-s)}v(s)ds+\rho_3\int_0^t e^{-c(t-s)}v(s-\tau)ds\\ \qquad\quad +\rho_4\int_0^t e^{-c(t-s)}\int_0^{+\infty}k(r)v(s-r)drds+\rho_5\int_0^t e^{-c(t-s)}w(s)ds\\ \qquad\quad +\rho_6\sum\limits_{t_k\le t} e^{-c(t-t_k)}w(t_k^-),\ t\ge0,\\ v(t)\le M,\ t\le 0, \end{array} \right. \end{equation} | (3.1) |
where v\in PC(\mathbb{R}, \mathbb{R}_+) , w\in PC(\mathbb{R}_+, \mathbb{R}_+) , c > 0 , \rho_1\ge M > 0 , \rho_i > 0 , i\in\bar{6} , and \acute{\eta}\le t_k-t_{k-1} < \infty for \forall k\in\mathbb{N} .
Lemma 5. If \mu = c-\rho_2-\rho_3-\rho_4/\lambda^* > 0 , then there exists a constant 0 < \lambda < c\land \lambda^* such that \Theta(\lambda) < 1 and
\begin{equation} v(t) < Ne^{-\lambda t}+\kappa\sup\limits_{0\le s\le t} w(s), \end{equation} | (3.2) |
where \kappa = \frac{c}{\mu}(\frac{\rho_5}{c}+\frac{\rho_6}{1-e^{-c\acute{\eta}}}) , N = \frac{2\rho_1}{1-\Theta(\lambda)}+M , and
\begin{equation} \Theta(\lambda) = \frac{\rho_2}{c-\lambda}+\frac{\rho_3}{c-\lambda}e^{\lambda\tau}+\frac{\rho_4}{(c-\lambda)(\lambda^*-\lambda)}. \end{equation} | (3.3) |
Proof. Consider the function \Theta(a) for a\in[0, c\land\lambda^*) . Since \mu > 0 , \Theta(0) < 1 , and \Theta(a) converges to positive infinity or to a constant as a\to c\land\lambda^* . Additionally, \Theta(a) is monotone and continuous with respect to a . Thus, there exists \lambda\in(0, c\land\lambda^*) such that \Theta(\lambda) < 1 , which further indicates that
\begin{align} &\frac{\rho_1}{N}+\frac{\rho_2}{c-\lambda}+\frac{\rho_3}{c-\lambda}e^{\lambda\tau}+\frac{\rho_4}{(c-\lambda)(\lambda^*-\lambda)} = \frac{\rho_1}{N}+\Theta(\lambda)\\ & = \frac{\rho_1}{\frac{2\rho_1}{1-\Theta(\lambda)}+M}+\Theta(\lambda) < \frac{\rho_1}{\frac{2\rho_1}{1-\Theta(\lambda)}}+\Theta(\lambda) = \frac{1-\Theta(\lambda)}{2}+\Theta(\lambda) = \frac{1+\Theta(\lambda)}{2} < 1. \end{align} | (3.4) |
If (3.2) is not true, there exists t^* > 0 such that
\begin{equation} v(t^*)\ge Ne^{-\lambda t^*}+\kappa\sup\limits_{0\le s\le t^*}w(s), \end{equation} | (3.5) |
and
\begin{equation} v(t) < Ne^{-\lambda t}+\kappa\sup\limits_{0\le s\le t}w(s),\ t < t^*. \end{equation} | (3.6) |
However, it follows from integral inequality (3.1) and Lemma 4 that
\begin{align} v(t^*) & \le \rho_1 e^{-c t^*}+\rho_2\int_0^{t^*} e^{-c(t^*-s)}v(s)ds+\rho_3\int_0^{t^*} e^{-c(t^*-s)}v(s-\tau)ds\\ & \quad +\rho_4\int_0^{t^*} e^{-c(t^*-s)}\int_0^{+\infty}k(r)v(s-r)drds+\rho_5\int_0^{t^*} e^{-c(t^*-s)}w(s)ds\\ & \quad +\rho_6\sum\limits_{t_k\le t^*} e^{-c(t^*-t_k)}w(t_k^-) \triangleq \sum\limits_{i = 1}^6 I_i(t^*). \end{align} | (3.7) |
From (3.6), we get that
\begin{align} I_2(t^*) & \le\rho_2\int_0^{t^*} e^{-c(t^*-s)}\Big[Ne^{-\lambda s}+\kappa\sup\limits_{0\le p\le s}w(p)\Big]ds\\ & \le \rho_2N\int_0^{t^*} e^{-c(t^*-s)}e^{-\lambda s}ds+\rho_2\kappa\sup\limits_{0\le p\le t^*}w(p)\int_0^{t^*} e^{-c(t^*-s)}ds\\ & \le \frac{\rho_2}{c-\lambda}Ne^{-\lambda t^*}+\frac{\rho_2\kappa}{c}\sup\limits_{0\le s\le t^*}w(s), \end{align} | (3.8) |
\begin{align} I_3(t^*) & \le\rho_3\int_0^{t^*} e^{-c(t^*-s)}\Big[Ne^{-\lambda(s-\tau)}+\kappa\sup\limits_{0\le p\le s-\tau}w(p)\Big]ds\\ & \le \rho_3e^{\lambda\tau}N\int_0^{t^*} e^{-c(t^*-s)}e^{-\lambda s}ds+\rho_3\kappa\sup\limits_{0\le p\le t^*}w(p)\int_0^{t^*} e^{-c(t^*-s)}ds\\ & \le \frac{\rho_3}{c-\lambda}e^{\lambda\tau}Ne^{-\lambda t^*}+\frac{\rho_3\kappa}{c}\sup\limits_{0\le s\le t^*}w(s), \end{align} | (3.9) |
\begin{equation} I_5(t^*) \le \rho_5\int_{0}^{t^*}e^{-c(t^*-s)}ds\sup\limits_{0\le s\le t^*}w(s)\le\frac{\rho_5}{c}\sup\limits_{0\le s\le t^*}w(s). \end{equation} | (3.10) |
From the Cauchy-Schwarz inequality, we obtain
\begin{align} I_4(t^*) & \le\rho_4\int_0^{t^*} e^{-c(t^*-s)}\int_0^{+\infty}k(r)\Big[Ne^{-\lambda(s-r)}+\kappa\sup\limits_{0\le p\le s-r}w(p)\Big]drds\\ & \le \rho_4 N \int_0^{+\infty}k(r)e^{\lambda r}dr\int_0^{t^*} e^{-c(t^*-s)}e^{-\lambda s}ds+\rho_4\kappa\sup\limits_{0\le p\le t^*}w(p)\int_0^{t^*} e^{-c(t^*-s)}ds\\ & \le \frac{\rho_4}{c-\lambda} Ne^{-\lambda t^*}\int_0^{+\infty}k(r)e^{\lambda r}dr+\frac{\rho_4\kappa}{c}\sup\limits_{0\le s\le t^*}w(s)\\ & \le \frac{\rho_4}{(c-\lambda)(\lambda^*-\lambda)} Ne^{-\lambda t^*}+\frac{\rho_4\kappa}{c}\sup\limits_{0\le s\le t^*}w(s). \end{align} | (3.11) |
From Lemma 4, we have
\begin{equation} I_6(t^*) \le \rho_6\sum\limits_{t_k\le t^*} e^{-c(t^*-t_k)}\sup\limits_{0\le s\le t^*}w(s)\le \frac{\rho_6}{1-e^{-c\acute{\eta}}}\sup\limits_{0\le s\le t^*}w(s). \end{equation} | (3.12) |
Combining (3.4) and (3.7)–(3.12), we obtain that
\begin{align} v(t^*) \le \Big(\frac{\rho_1}{N}+\Theta(\lambda)\Big)Ne^{-\lambda t^*}+\Big(\frac{(\rho_2+\rho_3+\rho_4)\kappa}{c}+\frac{\rho_5}{c}+\frac{\rho_6}{1-e^{-c\acute{\eta}}}\Big)\sup\limits_{0\le s\le t^*}w(s) < Ne^{-\lambda t^*}+\kappa\sup\limits_{0\le s\le t^*}w(s), \end{align} | (3.13) |
which contradicts (3.5) and thus completes the proof.
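As a numerical companion to this proof, the following minimal Python sketch (our own illustration with hypothetical parameter values) searches for a feasible decay rate \lambda with \Theta(\lambda) < 1 by bisection, exploiting the fact that \Theta is increasing on [0, c\land\lambda^*) .

```python
# Minimal sketch (illustration only): locate a decay rate lambda in (0, c ∧ lambda_star)
# with Theta(lambda) < 1, as used in Lemma 5.  Theta is increasing, so bisection on
# Theta(lambda) = 1 works.  All parameter values below are hypothetical.
import math

def Theta(lam, c, lam_star, rho2, rho3, rho4, tau):
    return (rho2 / (c - lam)
            + rho3 * math.exp(lam * tau) / (c - lam)
            + rho4 / ((c - lam) * (lam_star - lam)))

def decay_rate(c, lam_star, rho2, rho3, rho4, tau, tol=1e-10):
    lo, hi = 0.0, min(c, lam_star) - 1e-12
    if Theta(lo, c, lam_star, rho2, rho3, rho4, tau) >= 1.0:
        raise ValueError("mu <= 0: Lemma 5 does not apply")
    if Theta(hi, c, lam_star, rho2, rho3, rho4, tau) < 1.0:
        return hi                         # Theta < 1 on the whole interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Theta(mid, c, lam_star, rho2, rho3, rho4, tau) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo                             # largest lambda found with Theta(lambda) < 1

print(decay_rate(c=1.0, lam_star=1.0, rho2=0.2, rho3=0.2, rho4=0.1, tau=0.1))
```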
Theorem 1. Assume that \mu = \delta^2-6n\chi\sum_{i = 1}^n\sum_{j = 1}^n(|b_{ij}|^2+|p_{ij}|^2+|q_{ij}|^2/\lambda^*)l_j^2 > 0 , where \delta = \vartheta-\frac{1}{N}\sum_{i = 1}^p\frac{\ln\grave{\sigma}_i}{\grave{\eta}}-\frac{1}{N}\sum_{i = 1}^{N-p}\frac{\ln\acute{\sigma}_i}{\acute{\eta}} > 0 and \chi = \prod_{i = 1}^{N-p}\acute{\sigma}_i/\prod_{i = 1}^p\grave{\sigma}_i . Then, the DRDNNs with multiple impulses (2.5) are UISS over the class \mathcal{F}(\acute{\eta}, \grave{\eta}) .
Proof. From the inequality (\sum_{i = 1}^n a_i)^2 \le n\sum_{i = 1}^n a_i^2 and Lemma 2, the squared norm of the mild solution is estimated by
\begin{align} \|z(t)\|^2 & \le 6\bigg(\|T(t,0)\phi(0)\|^2+\|\int_{0}^t T(t,s)Bf(z(s))ds\|^2+\|\int_{0}^t T(t,s)Pf(z(s-\tau))ds\|^2\\ &\quad +\|\int_{0}^t T(t,s)Q\int_{0}^{+\infty} k(r)f(z(s-r))drds\|^2+\|\int_{0}^t T(t,s)u(s)ds\|^2+\|\sum\limits_{t_k\le t} T(t,t_k)u(t_k^-)\|^2\bigg)\\ & \triangleq 6\sum\limits_{i = 1}^6 \Gamma_i(t). \end{align} | (3.14) |
It follows from (H2), Lemma 3, and the class \mathcal{F}(\acute{\eta}, \grave{\eta}) that
\begin{equation} \|T(t,s)\|^2\le \prod\limits_{s < t_k\le t}(1+c_k)^2\|S(t-s)\|^2 \le \prod\limits_{s < t_k\le t}\sigma_k^2e^{-2\vartheta(t-s)}\le \chi\prod\limits_{i = 1}^p \grave{\sigma}_i^{\frac{t-s}{\grave{\eta}}\cdot\frac{2}{N}}\prod\limits_{i = 1}^{N-p} \acute{\sigma}_i^{\frac{t-s}{\acute{\eta}}\cdot\frac{2}{N}}e^{-2\vartheta(t-s)} \le \chi e^{-2\delta(t-s)}. \end{equation} | (3.15) |
Combining this with the Cauchy-Schwarz inequality yields
\begin{equation} \Gamma_1(t) \le \chi e^{-2\delta t}\|\phi\|_{\mathcal{PC}^b}^2 \le \chi e^{-\delta t}\|\phi\|_{\mathcal{PC}^b}^2, \end{equation} | (3.16) |
\begin{align} \Gamma_2(t) & = \sum\limits_{i = 1}^n \|\sum\limits_{j = 1}^n\int_{0}^t T(t,s)b_{ij}\hat{f}_j(\hat{z}_j(s,x))ds\|^2\\ & \le n\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n \Big(\int_{0}^t \|T(t,s)b_{ij}\hat{f}_j(\hat{z}_j(s,x))\|ds\Big)^2\\ & \le n\chi \sum\limits_{i = 1}^n \sum\limits_{j = 1}^n \Big(\int_{0}^t e^{-\delta(t-s)}|b_{ij}|l_j\|\hat{z}_j(s,x)\|ds\Big)^2\\ & = n\chi \sum\limits_{i = 1}^n \sum\limits_{j = 1}^n \Big(\int_{0}^t e^{-\frac{\delta}{2}(t-s)}e^{-\frac{\delta}{2}(t-s)}|b_{ij}|l_j\|\hat{z}_j(s,x)\|ds\Big)^2\\ & \le n\chi\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |b_{ij}|^2 l_j^2\int_{0}^t \big(e^{-\frac{\delta}{2}(t-s)}\big)^2ds\int_{0}^t \big(e^{-\frac{\delta}{2}(t-s)}\big)^2\|\hat{z}_j(s,x)\|^2ds\\ & \le \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |b_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\|z(s)\|^2ds, \end{align} | (3.17) |
\begin{align} \Gamma_3(t) & = \sum\limits_{i = 1}^n \|\sum\limits_{j = 1}^n\int_{0}^t T(t,s)p_{ij}\hat{f}_j(\hat{z}_j(s-\tau,x))ds\|^2\\ & \le n\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n \Big(\int_{0}^t \|T(t,s)p_{ij}\hat{f}_j(\hat{z}_j(s-\tau,x))\|ds\Big)^2\\ & \le n\chi \sum\limits_{i = 1}^n \sum\limits_{j = 1}^n \Big(\int_{0}^t e^{-\delta(t-s)}|p_{ij}|l_j\|\hat{z}_j(s-\tau,x)\|ds\Big)^2\\ & \le n\chi\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |p_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}ds\int_{0}^t e^{-\delta(t-s)}\|\hat{z}_j(s-\tau,x)\|^2ds\\ & \le \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |p_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\|z(s-\tau)\|^2ds, \end{align} | (3.18) |
\begin{align} \Gamma_4(t) & \le \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |q_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\Big(\int_0^{+\infty}k(r)\|\hat{z}_j(s-r,x)\|dr\Big)^2ds\\ & = \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |q_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\Big(\int_0^{+\infty}(k(r))^{\frac{1}{2}}(k(r))^{\frac{1}{2}}\|\hat{z}(s-r,x)\|dr\Big)^2ds\\ & \le \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |q_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\int_0^{+\infty}k(r)dr\int_0^{+\infty}k(r)\|\hat{z}(s-r,x)\|^2drds\\ & \le \frac{n\chi}{\delta}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^n |q_{ij}|^2 l_j^2\int_{0}^t e^{-\delta(t-s)}\int_0^{+\infty}k(r)\|z(s-r)\|^2drds, \end{align} | (3.19) |
\begin{align} \Gamma_5(t) & = \sum\limits_{i = 1}^n \|\int_{0}^t T(t,s)\hat{u}_i(s,x)ds\|^2 \le \sum\limits_{i = 1}^n \Big(\int_{0}^t \|T(t,s)\hat{u}_i(s,x)\|ds\Big)^2 \le \chi\sum\limits_{i = 1}^n \Big(\int_{0}^t e^{-\delta(t-s)}\|\hat{u}_i(s,x)\|ds\Big)^2\\ & \le \chi\sum\limits_{i = 1}^n \int_{0}^t e^{-\delta(t-s)}ds\int_{0}^t e^{-\delta(t-s)}\|\hat{u}_i(s,x)\|^2ds \le \frac{\chi}{\delta}\int_{0}^t e^{-\delta(t-s)}\|u(s)\|^2ds. \end{align} | (3.20) |
Similarly, it follows from Lemma 4 that
\begin{align} \Gamma_6(t) & \le \sum\limits_{i = 1}^n \|\sum\limits_{t_k\le t}T(t,t_k)\hat{u}_i(t_k^-)\|^2 \le \sum\limits_{i = 1}^n \Big(\sum\limits_{t_k\le t}\|T(t,t_k)\hat{u}_i(t_k^-)\|\Big)^2 \le \chi\sum\limits_{i = 1}^n \Big(\sum\limits_{t_k\le t}e^{-\delta(t-t_k)}\|\hat{u}_i(t_k^-)\|\Big)^2\\ & = \chi\sum\limits_{i = 1}^n \Big(\sum\limits_{t_k\le t}e^{-\frac{\delta}{2}(t-t_k)}e^{-\frac{\delta}{2}(t-t_k)}\|\hat{u}_i(t_k^-)\|\Big)^2 \le \chi\sum\limits_{i = 1}^n \Big(\sum\limits_{t_k\le t}\big(e^{-\frac{\delta}{2}(t-t_k)}\big)^2\Big)\Big(\sum\limits_{t_k\le t}\big(e^{-\frac{\delta}{2}(t-t_k)}\big)^2\|\hat{u}_i(t_k^-)\|^2\Big)\\ & \le \chi \Big(\sum\limits_{t_k\le t}e^{-\delta(t-t_k)}\Big)\Big(\sum\limits_{t_k\le t}e^{-\delta(t-t_k)}\sum\limits_{i = 1}^n\|\hat{u}_i(t_k^-)\|^2\Big)\le \frac{\chi}{1-e^{-\delta\acute{\eta}}}\sum\limits_{t_k\le t}e^{-\delta(t-t_k)}\|u(t_k^-)\|^2. \end{align} | (3.21) |
Combining (3.14)–(3.21), we have
\begin{align} \|z(t)\|^2 \le &\ \rho_1 e^{-\delta t}+\rho_2\int_0^t e^{-\delta(t-s)}\|z(s)\|^2ds+\rho_3\int_0^t e^{-\delta(t-s)}\|z(s-\tau)\|^2ds\\ & +\rho_4\int_0^t e^{-\delta(t-s)}\int_0^{+\infty}k(r)\|z(s-r)\|^2drds+\rho_5\int_0^t e^{-\delta(t-s)}\|u(s)\|^2ds\\ & +\rho_6\sum\limits_{t_k\le t}e^{-\delta(t-t_k)}\|u(t_k^-)\|^2, \end{align} | (3.22) |
where \rho_1 = 6\chi\|\phi\|_{\mathcal{PC}^b}^2 , \rho_2 = \frac{6n\chi}{\delta}\sum_{i = 1}^n \sum_{j = 1}^n |b_{ij}|^2 l_j^2 , \rho_3 = \frac{6n\chi}{\delta}\sum_{i = 1}^n \sum_{j = 1}^n |p_{ij}|^2 l_j^2 , \rho_4 = \frac{6n\chi}{\delta}\sum_{i = 1}^n \sum_{j = 1}^n |q_{ij}|^2 l_j^2 , \rho_5 = \frac{6\chi}{\delta} , \rho_6 = \frac{6\chi}{1-e^{-\delta\acute{\eta}}} . Combining this with \|z(t)\|^2\le \|\phi\|_{\mathcal{PC}^b}^2 for t\le 0 , it follows from Lemma 5 that there exists a constant 0 < \lambda < \delta\land\lambda^* such that \Theta(\lambda) < 1 and
\begin{equation} \|z(t)\|^2 \le Ne^{-\lambda t} + \kappa\sup\limits_{0\le s\le t}\|u(s)\|^2, \end{equation} | (3.23) |
where N = (\frac{2\chi}{1-\Theta(\lambda)}+1)\|\phi\|_{\mathcal{PC}^b}^2 and \kappa = \frac{\chi}{\mu}(1+\frac{\delta^2}{(1-e^{-\delta\acute{\eta}})^2}) . Therefore, the DRDNNs with multiple impulses are UISS over the class \mathcal{F}(\acute{\eta}, \grave{\eta}) .
Remark 2. In [28], the ISS property of stochastic delayed neural networks is investigated, showing that the ISS of the continuous dynamics can be retained under certain destabilising impulses. Then, ISS criteria for DRDNNs with impulses were established by an impulsive delay inequality in [27], where two scenarios are considered: stabilising continuous dynamics with destabilising impulses and destabilising continuous dynamics with stabilising impulses. One can notice that the results in [27,28] focused on a single impulse effect (stabilising or destabilising impulses) and ignored the hybrid effect of multiple impulses. In comparison, the results established here indicate that the ISS property of the continuous dynamics can be retained under certain multiple impulsive disturbances and that unstable continuous dynamics can be stabilised by multiple impulsive control, provided the intervals between the multiple impulses are bounded.
Remark 3. In most of the existing works on ISS of neural networks [20,27,28], the ISS criteria are established by the Lyapunov method and extended Halanay-type inequalities. In comparison, the ISS criteria in this paper are established by a direct estimate of the mild solution and an integral inequality, which handles the multiple impulses and infinite distributed delays.
If the multiple impulses degenerate into single impulses, that is, stabilising impulses or destabilising impulses, we have the following corollaries from Theorem 1.
Corollary 1. Assume that \grave{\mu} = \grave{\delta}^2-\frac{6n}{\grave{\sigma}}\sum_{i = 1}^n\sum_{j = 1}^n(|b_{ij}|^2+|p_{ij}|^2+|q_{ij}|^2/\lambda^*)l_j^2 > 0 , where \grave{\delta} = \vartheta-\frac{\ln\grave{\sigma}}{\grave{\eta}} > 0 , \grave{\sigma} = \grave{\sigma}_k < 1 , k\in\mathbb{N} , and N = 1 . Then, the DRDNNs with stabilising impulses (2.5) are UISS over the class \mathcal{F}^+(\grave{\eta})\cap\mathcal{F}_-(\acute{\eta}) for arbitrary \acute{\eta} > 0 .
Corollary 2. Assume that \acute{\mu} = \acute{\delta}^2-6n\acute{\sigma}\sum_{i = 1}^n\sum_{j = 1}^n(|b_{ij}|^2+|p_{ij}|^2+|q_{ij}|^2/\lambda^*)l_j^2 > 0 , where \acute{\delta} = \vartheta-\frac{\ln\acute{\sigma}}{\acute{\eta}} > 0 , \acute{\sigma} = \acute{\sigma}_k > 1 , k\in\mathbb{N} , and N = 1 . Then, the DRDNNs with destabilising impulses (2.5) are UISS over the class \mathcal{F}_-(\acute{\eta}) .
Remark 4. The corollaries with stabilising or destabilising impulses accord with the results in [27,43]. However, infinite distributed delays are additionally included in the neural-network model here, so our results are more general.
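To make the verification of the conditions in Theorem 1 concrete, the following minimal Python sketch (our own illustration; the function name and input layout are hypothetical) computes \delta and \mu for given network data and a periodic impulse pattern; \delta > 0 and \mu > 0 then correspond to the UISS condition of Theorem 1.

```python
# Minimal sketch (illustration only): check delta > 0 and mu > 0 from Theorem 1.
# sigma_grave / sigma_acute are the stabilising / destabilising strengths over one
# impulse period (H2), eta_grave / eta_acute bound the impulse intervals, l are the
# Lipschitz constants from (H1), and theta is the decay rate of Lemma 3.
import math

def theorem1_condition(B, P, Q, l, lam_star, theta,
                       sigma_grave, sigma_acute, eta_grave, eta_acute):
    n = len(l)
    N = len(sigma_grave) + len(sigma_acute)        # impulse period in (H2)
    chi = math.prod(sigma_acute) / math.prod(sigma_grave)
    delta = (theta
             - sum(math.log(s) for s in sigma_grave) / (N * eta_grave)
             - sum(math.log(s) for s in sigma_acute) / (N * eta_acute))
    S = sum((B[i][j]**2 + P[i][j]**2 + Q[i][j]**2 / lam_star) * l[j]**2
            for i in range(n) for j in range(n))
    mu = delta**2 - 6.0 * n * chi * S
    return delta, mu   # UISS over F(eta_acute, eta_grave) requires delta > 0 and mu > 0

# Hypothetical usage with the data of Example 1 and the strengths of control (I):
# delta, mu = theorem1_condition(B=[[0, 0], [0, 0]], P=[[1, -0.5], [0.5, 1]],
#                                Q=[[0.2, 0.3], [0.1, 0.4]], l=[1/3, 1/3],
#                                lam_star=1.0, theta=0.5,
#                                sigma_grave=[0.25], sigma_acute=[4/3],
#                                eta_grave=0.05, eta_acute=0.05)
```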
In this section, the effectiveness of the theoretical results is demonstrated by two numerical examples.
Example 1. Consider the DRDNNs with multiple impulses which consist of two neurons on \mathcal{O} = \{x|-1\le x\le 1\} , where the parameters are given by B = 0 , and
\begin{equation*} D = \left( \begin{array}{cc} 0.2 & 0\\ 0 & 0.3 \end{array} \right),\ A = \left( \begin{array}{cc} 0.3 & 0\\ 0 & 0.4 \end{array} \right),\ P = \left( \begin{array}{cc} 1 & -0.5\\ 0.5 & 1 \end{array} \right),\ Q = \left( \begin{array}{cc} 0.2 & 0.3\\ 0.1 & 0.4 \end{array} \right), \end{equation*} |
and \tau = 0.1 , k(s) = e^{-s} , f(s) = \tanh(s)/3 . The initial condition is given by
\begin{equation*} z_1(t,x) = \bigg\{ \begin{array}{ll} \frac{1}{10}\cos(\frac{x\pi}{2}), & t\in [-5,0],\\ 0, & t\in(-\infty,-5), \end{array}\ z_2(t,x) = \bigg\{ \begin{array}{ll} \frac{1}{10}\sin(x\pi), & t\in [-5,0],\\ 0, & t\in(-\infty,-5), \end{array} \end{equation*} |
where x\in\mathcal{O} , and the boundary condition is the homogeneous Dirichlet boundary condition. Then we have the following result from Theorem 1 and Corollary 1.
Corollary 3. The DRDNNs (2.5) with the above parameters are input-to-state stable via the following multiple impulsive control (Ⅰ) or stabilising impulsive control (Ⅱ):
(Ⅰ): \frac{\ln 4}{\grave{\eta}}-\frac{\ln 4/3}{\acute{\eta}} > 7.9244 , \sigma_{2k-1} = \frac{4}{3} , \sigma_{2k} = \frac{1}{4} , k\in\mathbb{N} ;
(Ⅱ): \grave{\eta} < 0.3499 , \acute{\eta} > 0 , \sigma_k = \frac{1}{4} , k\in\mathbb{N} .
Figure 1 illustrates the state norms of the DRDNNs under the impulsive controls (Ⅰ) and (Ⅱ) with the external inputs u_1(t, x) = \sin(x\pi) and u_2(t, x) = \frac{1}{t+1}\cos(t\pi/2) . We can see that the DRDNNs remain bounded under the bounded spatiotemporal external input, which corresponds to the ISS property.
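For readers who wish to reproduce qualitative pictures like Figure 1, the following minimal Python sketch (our own illustration, not the authors' simulation code) integrates the impulsive DRDNN (2.1) with the data of Example 1 using central finite differences in space and forward Euler in time; the grid sizes, the time step, the kernel truncation length R, and the impulse sequence used (stabilising impulses of strength |1+c_k| = 1/4 every 0.2 time units, consistent with control (Ⅱ)) are assumptions of the sketch.

```python
# Minimal sketch (illustration only): explicit finite-difference simulation of (2.1)
# with the Example 1 data; the distributed delay is truncated at r <= R (O(e^{-R}) error).
import numpy as np

D = np.array([0.2, 0.3]); A = np.array([0.3, 0.4])
B = np.zeros((2, 2))
P = np.array([[1.0, -0.5], [0.5, 1.0]])
Q = np.array([[0.2, 0.3], [0.1, 0.4]])
tau = 0.1
f = lambda s: np.tanh(s) / 3.0
u = lambda t, x: np.vstack((np.sin(np.pi * x),
                            np.full_like(x, np.cos(t * np.pi / 2) / (t + 1.0))))

M = 41
x = np.linspace(-1.0, 1.0, M)
dx = x[1] - x[0]
dt, T, R = 2e-3, 10.0, 5.0                    # step, horizon, kernel truncation
nhist = int(round(R / dt))
hist = np.zeros((nhist + 1, 2, M))            # hist[k] = z(t - k*dt, .)
hist[:] = np.stack((np.cos(np.pi * x / 2) / 10, np.sin(np.pi * x) / 10))  # data on [-5, 0]
z = hist[0].copy()
kernel = np.exp(-dt * np.arange(nhist + 1))   # k(r) = e^{-r} on the delay grid

period, c_k = 0.2, -0.75                      # stabilising impulses, |1+c_k| = 1/4
next_imp = period

for step in range(int(round(T / dt))):
    t = step * dt
    lap = np.zeros_like(z)
    lap[:, 1:-1] = (z[:, 2:] - 2.0 * z[:, 1:-1] + z[:, :-2]) / dx**2
    z_tau = hist[int(round(tau / dt))]                          # z(t - tau, .)
    dist = (kernel[:, None, None] * f(hist)).sum(axis=0) * dt   # ∫_0^R k(r) f(z(t-r)) dr
    rhs = (D[:, None] * lap - A[:, None] * z + B @ f(z) + P @ f(z_tau)
           + Q @ dist + u(t, x))
    z = z + dt * rhs
    t_new = t + dt
    if t_new >= next_imp - 1e-9:              # impulse: z -> (1 + c_k) z + u(t_k^-)
        z = (1.0 + c_k) * z + u(t_new, x)
        next_imp += period
    z[:, 0] = z[:, -1] = 0.0                  # homogeneous Dirichlet boundary
    hist = np.roll(hist, 1, axis=0)
    hist[0] = z

print("final L2 norm:", float(np.sqrt(np.sum(z**2) * dx)))
```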
Example 2. Consider the DRDNNs with multiple impulses which consist of two neurons on \mathcal{O} = \{x|-1\le x\le 1\} , where the parameters are given by
\begin{equation*} D = \left( \begin{array}{cc} 0.4 & 0\\ 0 & 0.5 \end{array} \right),\ A = \left( \begin{array}{cc} 1 & 0\\ 0 & 1.9 \end{array} \right),\ B = \left( \begin{array}{cc} -0.6 & 0.8\\ 0.4 & -0.4 \end{array} \right),\ P = \left( \begin{array}{cc} 0.1 & 0.4\\ 0.4 & -0.6 \end{array} \right),\ Q = \left( \begin{array}{cc} 0.4 & 0.3\\ 0.5 & 0.4 \end{array} \right), \end{equation*} |
and \tau = 1.8 , k(s) = e^{-s} , f(s) = 0.1\tanh(s) . The initial condition is given by
\begin{equation*} z_1(t,x) = \bigg\{ \begin{array}{ll} \cos(\frac{x\pi}{2}), & t\in [-5,0],\\ 0, & t\in(-\infty,-5), \end{array}\ z_2(t,x) = \bigg\{ \begin{array}{ll} \sin(x\pi), & t\in [-5,0],\\ 0, & t\in(-\infty,-5), \end{array} \end{equation*} |
where x\in\mathcal{O} , and the boundary condition is the homogeneous Dirichlet boundary condition. Then we have the following result from Theorem 1 and Corollary 2.
Corollary 4. The DRDNNs (2.5) with the above parameters are input-to-state stable with the following multiple impulsive disturbance (Ⅲ) or destabilising impulsive disturbance (Ⅳ):
(Ⅲ): \frac{\ln 3}{\acute{\eta}}-\frac{\ln 2}{\grave{\eta}} < 1.2423 , \sigma_{3k-2} = \frac{3}{2} , \sigma_{3k-1} = 2 , \sigma_{3k} = \frac{1}{2} , k\in\mathbb{N} ;
(Ⅳ): \acute{\eta} > 0.9791 , \sigma_k = \frac{3}{2} , k\in\mathbb{N} .
Figure 2 illustrates the state norms of the DRDNNs under the impulsive disturbances (Ⅲ) and (Ⅳ) with the external inputs u_1(t, x) = \cos(t\pi/10)\sin(x\pi) and u_2(t, x) = (\sin(t\pi/10)+1)\cos(t\pi/2) in the continuous dynamics, where the ISS property is also observed.
Remark 5. Because of the multiple impulses and the infinite distributed delays, the existing results in [27,28,43] are not applicable to these two numerical examples.
This work addresses the ISS of DRDNNs with multiple impulses after reformulating the neural-network model in terms of an abstract impulsive functional differential equation. The ISS property is studied by a direct estimate of the mild solution and an integral inequality with infinite distributed delay. The obtained results show that the ISS property can be ensured if the intervals between the multiple impulses are bounded. Note that the impulsive sequences considered here have a fixed dwell-time. A more general class of impulsive sequences satisfying an average dwell-time condition has been considered in the ISS literature [8,9]. Thus, future work will focus on the ISS analysis of impulsive DRDNNs under the average dwell-time condition.
This work was supported by the National Natural Science Foundation of China (61673247), the Research Fund for Excellent Youth Scholars of Shandong Province (JQ201719), the China Postdoctoral Science Foundation (2020M672109), and the Shandong Province Postdoctoral Innovation Project.
The authors declare that there are no conflicts of interest.
[1] | Y. LeCun, Y. Bengio, G. E. Hinton, Deep learning, Nature, 521 (2015), 436–444. doi: 10.1038/nature14539 |
[2] | P. Zeng, H. Li, H. He, S. Li, Dynamic energy management of a microgrid using approximate dynamic programming and deep recurrent neural network learning, IEEE T. Smart Grid, 10 (2018), 4435–4445. |
[3] | W. H. Chen, S. Luo, W. X. Zheng, Impulsive synchronization of reaction-diffusion neural networks with mixed delays and its application to image encryption, IEEE T. Neur. Net. Lear., 27 (2016), 2696–2710. doi: 10.1109/TNNLS.2015.2512849 |
[4] | H. Ma, H. Li, R. Lu, T. Huang, Adaptive event-triggered control for a class of nonlinear systems with periodic disturbances, Sci. China Inform. Sci., 63 (2020), 1–15. doi: 10.1007/s11431-019-9532-5 |
[5] | W. Xiao, L. Cao, H. Li, R. Lu, Observer-based adaptive consensus control for nonlinear multi-agent systems with time-delay, Sci. China Inform. Sci., 63 (2020), 1–17. doi: 10.1007/s11431-019-9532-5 |
[6] | Z. Wu, H. R. Karimi, P. Shi, Dissipativity-based small-gain theorems for stochastic network systems, IEEE T. Automat. Contr., 61 (2015), 2065–2078. |
[7] | E. D. Sontag, Smooth stabilization implies coprime factorization, IEEE T. Automat. Contr., 34 (1989), 435–443. doi: 10.1109/9.28018 |
[8] | J. P. Hespanha, D. Liberzon, A. R. Teel, Lyapunov conditions for input-to-state stability of impulsive systems, Automatica, 44 (2008), 2735–2744. doi: 10.1016/j.automatica.2008.03.021 |
[9] | S. Dashkovskiy, A. Mironchenko, Input-to-state stability of nonlinear impulsive systems, SIAM J. Control Optim., 51 (2013), 1962–1987. doi: 10.1137/120881993 |
[10] | C. Cai, A. R. Teel, Robust input-to-state stability for hybrid systems, SIAM J. Control Optim., 51 (2013), 1651–1678. doi: 10.1137/110824747 |
[11] | W. H. Chen, W. Z. Zheng, Input-to-state stability and integral input-to-state stability of nonlinear impulsive systems with delays, Automatica, 45 (2009), 1481–1488. doi: 10.1016/j.automatica.2009.02.005 |
[12] | S. Dashkovskiy, M. Kosmykov, A. Mironchenko, L. Naujok, Stability of interconnected impulsive systems with and without time delays, using Lyapunov methods, Nonlinear Anal. Hybri., 6 (2012), 899–915. doi: 10.1016/j.nahs.2012.02.001 |
[13] | X. Wu, Y. Tang, W. Zhang, Input-to-state stability of impulsive stochastic delayed systems under linear assumptions, Automatica, 66 (2016), 195–204. doi: 10.1016/j.automatica.2016.01.002 |
[14] | X. Wu, Y. Tang, J. Cao, Input-to-state stability of time-varying switched systems with time-delays, IEEE T. Automat. Contr., 64 (2019), 2537–2544. doi: 10.1109/TAC.2018.2867158 |
[15] | R. Rao, X. Li, Input-to-state stability in the meaning of switching for delayed feedback switched stochastic financial system, AIMS Mathematics, 6 (2020), 1040–1064. |
[16] | C. K. Ahn, Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay, Inform. Sciences, 180 (2010), 4582–4594. doi: 10.1016/j.ins.2010.08.014 |
[17] | R. Wei, J. Cao, J. Kurths, Novel fixed-time stabilization of quaternion-valued BAMNNs with disturbances and time-varying coefficients, AIMS Mathematics, 5 (2020), 3089–3110. doi: 10.3934/math.2020199 |
[18] | Z. Yang, W. Zhou, T. Huang, Exponential input-to-state stability of recurrent neural networks with multiple time-varying delays, Cogn. Neurodynamics, 8 (2014), 47–54. doi: 10.1007/s11571-013-9258-9 |
[19] | Q. Zhu, J. Cao, R. Rakkiyappan, Exponential input-to-state stability of stochastic Cohen-Grossberg neural networks with mixed delays, Nonlinear Dynam., 79 (2015), 1085–1098. doi: 10.1007/s11071-014-1725-2 |
[20] | L. Liu, J. Cao, C. Qian, Pth moment exponential input-to-state stability of delayed recurrent neural networks with Markovian switching via vector Lyapunov function, IEEE T. Neur. Net. Lear., 29 (2017), 3152–3163. |
[21] | X. Li, T. Caraballo, R. Rakkiyappan, X. Han, On the stability of impulsive functional differential equations with infinite delay, Math. Method. Appl. Sci., 38 (2015), 3130–3140. doi: 10.1002/mma.3303 |
[22] | A. Chaillet, G. Detorakis, S. Palfi, S. Senova, Robust stabilization of delayed neural fields with partial measurement and actuation, Automatica, 83 (2017), 262–274. doi: 10.1016/j.automatica.2017.05.011 |
[23] | J. G. Lu, Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions, Chaos Soliton. Fract., 35 (2008), 116–125. doi: 10.1016/j.chaos.2007.05.002 |
[24] | J. Cao, G. Stamov, I. Stamova, S. Simeonov, Almost periodicity in impulsive fractional-order reaction-diffusion neural networks with time-varying delays, IEEE Trans. Cybern., 51 (2021), 151–161. doi: 10.1109/TCYB.2020.2967625 |
[25] | Y. Sheng, H. Zhang, Z. Zeng, Stability and robust stability of stochastic reaction-diffusion neural networks with infinite discrete and distributed delays, IEEE T. Syst. Man Cy. S., 50 (2018), 1721–1732. |
[26] | J. Zhou, S. Xu, B. Zhang, Y. Zou, H. Shen, Robust exponential stability of uncertain stochastic neural networks with distributed delays and reaction-diffusions, IEEE T. Neur. Net. Lear., 23 (2012), 1407–1416. doi: 10.1109/TNNLS.2012.2203360 |
[27] | Z. Yang, W. Zhou, T. Huang, Input-to-state stability of delayed reaction-diffusion neural networks with impulsive effects, Neurocomputing, 333 (2019), 261–272. doi: 10.1016/j.neucom.2018.12.019 |
[28] | J. Li, W. Zhou, Z. Yang, State estimation and input-to-state stability of impulsive stochastic BAM neural networks with mixed delays, Neurocomputing, 227 (2017), 37–45. doi: 10.1016/j.neucom.2016.08.101 |
[29] | K. N. Wu, M. Z. Ren, X. Z. Liu, Exponential input-to-state stability of stochastic delay reaction-diffusion neural networks, Neurocomputing, 412 (2020), 399–405. doi: 10.1016/j.neucom.2019.09.118 |
[30] | T. Wei, X. Li, V. Stojanovic, Input-to-state stability of impulsive reaction-diffusion neural networks with infinite distributed delays, Nonlinear Dynam., 103 (2021), 1733–1755. doi: 10.1007/s11071-021-06208-6 |
[31] | A. Mironchenko, C. Prieur, Input-to-state stability of infinite-dimensional systems: Recent results and open questions, SIAM Rev., 62 (2020), 529–614. doi: 10.1137/19M1291248 |
[32] | I. Karafyllis, M. Krstic, Input-to-state stability for PDEs, Springer International Publishing, 2019. |
[33] | W. He, F. Qian, J. Cao, Pinning-controlled synchronization of delayed neural networks with distributed-delay coupling via impulsive control, Neural Networks, 85 (2017), 1–9. doi: 10.1016/j.neunet.2016.09.002 |
[34] | H. Zhang, T. Ma, G. Huang, Z. Wang, Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control, IEEE T. Syst. Man Cy. B, 40 (2009), 831–844. |
[35] | J. Hu, G. Sui, X. Lv, X. Li, Fixed-time control of delayed neural networks with impulsive perturbations, Nonlinear Anal. Model., 23 (2018), 904–920. doi: 10.15388/NA.2018.6.6 |
[36] | X. Li, D. O'Regan, H. Akca, Global exponential stabilization of impulsive neural networks with unbounded continuously distributed delays, IMA J. Appl. Math., 80 (2015), 85–99. doi: 10.1093/imamat/hxt027 |
[37] | X. Li, J. Shen, R. Rakkiyappan, Persistent impulsive effects on stability of functional differential equations with finite or infinite delay, Appl. Math. Comput., 329 (2018), 14–22. doi: 10.1016/j.amc.2018.01.036 |
[38] | S. Dashkovskiy, P. Feketa, Input-to-state stability of impulsive systems and their networks, Nonlinear Anal. Hybri., 26 (2017), 190–200. doi: 10.1016/j.nahs.2017.06.004 |
[39] | P. Li, X. Li, J. Lu, Input-to-state stability of impulsive delay systems with multiple impulses, IEEE T. Automat. Contr., 66 (2020), 362–368. |
[40] | A. Pazy, Semigroups of linear operators and applications to partial differential equations, Springer-Verlag, New York, 1983. |
[41] | D. Xu, B. Li, S. Long, L. Teng, Moment estimate and existence for solutions of stochastic functional differential equations, Nonlinear Anal. Theor., 108 (2014), 128–143. doi: 10.1016/j.na.2014.05.004 |
[42] | L. Gawarecki, V. Mandrekar, Stochastic differential equations in infinite dimensions: with applications to stochastic partial differential equations, Springer Science & Business Media, 2010. |
[43] | D. Li, G. Chen, Impulses-induced p-exponential input-to-state stability for a class of stochastic delayed partial differential equations, Int. J. Control, 92 (2019), 1827–1835. doi: 10.1080/00207179.2017.1414309 |