Research article

Exponential synchronization control of delayed memristive neural network based on canonical Bessel-Legendre inequality

  • In this paper, we study the exponential synchronization problem of a class of delayed memristive neural networks (MNNs). Firstly, an intermittent control scheme is designed to resolve the parameter mismatch problem of MNNs: a discontinuous controller with two tunable scalars is designed, so the upper limit of the control gain can be adjusted flexibly. Secondly, an augmented Lyapunov-Krasovskii functional (LKF) is proposed that incorporates the vector information of the N-order canonical Bessel-Legendre (B-L) inequality. The LKF method is used to obtain a stability criterion that ensures exponential synchronization of the system; the conservatism of the result decreases as the order of the B-L inequality increases. Finally, the effectiveness of the main results is verified by two simulation examples.

    Citation: Xingxing Song, Pengfei Zhi, Wanlu Zhu, Hui Wang, Haiyang Qiu. Exponential synchronization control of delayed memristive neural network based on canonical Bessel-Legendre inequality[J]. AIMS Mathematics, 2022, 7(3): 4711-4734. doi: 10.3934/math.2022262




In recent years, MNNs have attracted extensive attention because of their broad application prospects in engineering fields such as associative memory, signal processing and pattern recognition [1,2,3,4]. In real systems, resistors are used to simulate the synapses of neural networks and store the historical state of the system. However, because bulky resistors limit the integration density of the circuit, the function of a synapse cannot be completely simulated, which causes the computation of the neural-network circuit to deviate from the true value [5]. The memristor was first proposed by Professor Chua in 1971, and the first physical memristor was developed by HP Labs in 2008. The value of a memristor is determined by the voltage applied to it, its polarity, and the duration of the applied voltage [6]. In 2010, Professor Lu constructed the first electroneural circuit using memristors and verified that memristors memorize the internal states of the system [7,8]. This characteristic suggests that memristors are the electronic components that most closely resemble the synapses of neurons. MNNs were developed by replacing the resistors in traditional neural networks with memristors.

Compared with traditional neural networks, memristive neural networks have a larger information capacity and greater computing power, which enables them to better handle problems related to state memory and information processing [9]. In most cases, the control signal is transmitted from one system to another over a wireless communication network, so nonlinear dynamic behavior inevitably occurs in the control process and time delays are introduced into the system [10,11]. When the delay interval is very large, delay-independent stability criteria are very conservative [12]. Time delays may also lead to instability, oscillation and performance deterioration of the neural network system [13]. Therefore, in order to reduce the conservativeness of the results, taking the time delay into account is indispensable when designing controllers that keep the system stable.

Synchronization means that, as the network evolves, the states of the driving system and the response system tend to a common state; it is one of the most basic and important dynamic characteristics of a neural network [14]. System synchronization control is often used in secure communication, biological systems, signal processing and other fields [15]. Many researchers are studying synchronization control schemes for various neural networks [16,17,18]. In [16], a distributed event-triggered controller is designed to study fixed-time synchronization of coupled MNNs. In [17], the fixed-time synchronization problem of coupled MNNs based on a decentralized event-triggering scheme is studied. In [18], the authors designed an exponential-decay switching event-triggered scheme to study the global stabilization of delayed MNNs. Inspired by [19], this paper designs a new discontinuous feedback control scheme, and a tighter inequality is used to reduce conservatism. The upper bound of the control gain can be reduced by adjusting the two tunable scalars of the controller, so the controller proposed here is more flexible in application.

In the past, Jensen's inequality was widely applied to reduce the conservatism of stability criteria for time-delay systems [20]. However, the Wirtinger-based integral inequality proposed by Seuret and Gouaisbaut is less conservative than Jensen's inequality and contains it as a special case [21]. Many researchers have since improved the estimation of integral terms, e.g., via Wirtinger inequalities [22], free-matrix-based integral inequalities [23] and auxiliary-function-based integral inequalities [24]. Recently, the B-L inequality has received increasing attention: it generalizes the above integral inequalities and introduces less amplification for some integral terms [25]. However, its integral interval is fixed at [-h, 0] , so the B-L inequality has a limited range of applications. The canonical B-L inequality transfers the integral interval to a general interval [a_1, a_2] by introducing canonical orthogonal polynomials [26]. At present, most results on the stability of delayed neural networks choose a special Bessel-Legendre inequality ( N = 1 or N = 2 ), as in reference [27]. In order to further reduce the conservatism of the results, the Legendre vector information is fully incorporated in the design of the augmented LKF. By using the canonical B-L inequality and a reciprocally convex combination lemma, a criterion for exponential synchronization of the system is obtained.

This paper deals with the exponential synchronization control of delayed MNNs based on a discontinuous feedback controller. We take the average of the maximum and minimum weights of each memristive synapse as its nominal weight, thereby transforming the memristive neural network into a traditional neural network with uncertain parameters. The main contributions of this paper are as follows:

(1) In order to solve the parameter mismatch problem of MNNs, an intermittent control scheme is adopted for the synchronization of the master-slave neural network system, which relaxes the strict assumptions. The designed discontinuous controller has two tunable scalars, which allow the upper limit of the control gain to be adjusted flexibly and reduce the control cost.

(2) In order to reduce the conservatism of the results, this paper uses the canonical B-L inequality to estimate the bounds of the integral terms. It was shown in [28] that if the LKF used is weakly related to the B-L inequality, the benefit of the tighter bounds of the inequality is greatly reduced. Therefore, this paper constructs a suitable augmented LKF that fully incorporates the Legendre polynomial information.

Finally, a stability criterion with low conservatism is obtained by using the canonical B-L inequality, and the conservatism decreases as the order N increases.

Notations: In this paper, the superscript T denotes the transpose of a matrix and the superscript -1 its inverse. \mathbb{R}^n denotes the n-dimensional Euclidean space, and \mathbb{S}^n_+ denotes the set of positive definite matrices in \mathbb{R}^{n\times n} . The symbol \|\cdot\| refers to the Euclidean vector norm, and \binom{j}{i} = \dfrac{j!}{(j-i)!\,i!} .

The time-delay neural network can be realized by large-scale integrated circuits using memristors, whose memductances represent the connection weights. According to Kirchhoff's current law, the equation of the pth subsystem can be described as follows:

\begin{equation} C_p\dot{x}_p(t) = -\left[\sum\limits^{n}_{q = 1}(W_{fpq}+W_{gpq})+\dfrac{1}{R_p}\right]x_p(t)+sgn_{pq}\sum\limits^{n}_{q = 1}W_{fpq}l_q(x_q(t))+sgn_{pq}\sum\limits^{n}_{q = 1}W_{gpq}l_q(x_q(t-h_q(t)))+I_p(t) \end{equation} (1)

where p = 1, 2, ..., n , x_p(t) is the voltage of the capacitor C_p at time t , and R_p is the resistance in parallel with C_p . I_p is the external input or bias, and l_q is the activation function. h_q(t) is the time-varying transmission delay of the qth neuron, and it satisfies

\begin{equation} 0\leq h(t)\leq h, \quad \mu_1\leq\dot{h}(t)\leq\mu_2 \end{equation} (2)

M_{fpq} is the memristor connecting the activation function l_q(x_q(t)) to x_p(t) , and M_{gpq} is the memristor connecting the activation function l_q(x_q(t-h_q(t))) to x_p(t-h(t)) . The memductances of the memristors M_{fpq} and M_{gpq} are denoted by W_{fpq} and W_{gpq} , respectively.

sgn_{pq} = \left\{\begin{array}{rl} 1, & p\neq q \\ -1, & p = q \end{array}\right.

The initial conditions of system (1) are x_p(t) = \phi_p(t)\in\mathcal{C}([-h, 0], \mathbb{R}) , where h = \max\limits_{1\leq q\leq n}\{h_q\} and \mu = \max\limits_{1\leq q\leq n}\{\mu_q\} .

As described by Chua [29], since computer information is encoded only as '0' and '1', the memristor only needs to exhibit two distinct states. System (1) can be rewritten as

\begin{equation} \begin{aligned} \dot{x}_p(t) = &-k_px_p(t)+\sum\limits^{n}_{q = 1}b_{pq}(x_p(t))l_q(x_q(t))\\ &+\sum\limits^{n}_{q = 1}d_{pq}(x_p(t))l_q(x_q(t-h_q(t)))+\tilde{L}_p(t) \end{aligned} \end{equation} (3)

    where

k_p = \dfrac{1}{C_p}\left[\sum\limits^{n}_{q = 1}(W_{fpq}+W_{gpq})+\dfrac{1}{R_p}\right], \quad \tilde{L}_p(t) = \dfrac{I_p(t)}{C_p}
b_{pq}(x_p(t)) = sgn_{pq}\times \dfrac{W_{fpq}}{C_p} = \left\{\begin{array}{c c} \acute{b}_{pq}, \: |x_p(t)|\leq \mathcal{T}_p \\ \grave{b}_{pq}, \: |x_p(t)| > \mathcal{T}_p \end{array}\right.
    d_{pq}(x_p(t)) = sgn_{pq}\times \dfrac{W_{gpq}}{C_p} = \left\{\begin{array}{c c} \acute{d}_{pq}, \: |x_p(t)|\leq \mathcal{T}_p \\ \grave{d}_{pq}, \: |x_p(t)| > \mathcal{T}_p \end{array}\right.

    and the switching jumps \mathcal{T}_p > 0 , \acute{b}_{pq} , \grave{b}_{pq} , \acute{d}_{pq} , and \grave{d}_{pq} , p, q = 1, 2, ..., n , are constants.
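The state-dependent switching of the connection weights above is easy to sketch numerically. A minimal Python illustration, where the threshold \mathcal{T}_p and the two weight values are hypothetical numbers, not taken from the paper's examples:

```python
# Sketch of the state-dependent memristive weight b_pq(x_p) from Eq. (3):
# the weight takes one of two constant values depending on whether |x_p|
# exceeds the switching jump T_p. All numbers here are illustrative.

def memristive_weight(x_p, w_acute, w_grave, T_p):
    """Return b'_pq if |x_p| <= T_p, else b`_pq."""
    return w_acute if abs(x_p) <= T_p else w_grave

# Example: T_p = 1.0, b'_pq = 0.8, b`_pq = 1.2 (hypothetical values)
assert memristive_weight(0.5, 0.8, 1.2, 1.0) == 0.8   # |x_p| <= T_p
assert memristive_weight(1.5, 0.8, 1.2, 1.0) == 1.2   # |x_p| >  T_p
```

The same piecewise rule applies to d_{pq}(x_p(t)) with the pair \acute{d}_{pq}, \grave{d}_{pq} .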

System (3) is taken as the driving system, and the corresponding response system is:

    \begin{equation} \begin{aligned} \dot{y}_p(t)& = -k_py_p(t)+\sum^{n}_{q = 1}b_{pq}(y_p(t))l_q(y_q(t))\\ \ & +\sum^{n}_{q = 1}d_{pq}(y_p(t))l_q(y_q(t-h_q(t)))+\tilde{L}_p(t)+u_p(t) \end{aligned} \end{equation} (4)

    where

    b_{pq}(y_p(t)) = \left\{\begin{array}{c c} \acute{b}_{pq}, \: |y_p(t)|\leq \mathcal{T}_p \\ \grave{b}_{pq}, \: |y_p(t)| > \mathcal{T}_p \end{array}\right.
    d_{pq}(y_p(t)) = \left\{\begin{array}{c c} \acute{d}_{pq}, \: |y_p(t)|\leq \mathcal{T}_p \\ \grave{d}_{pq}, \: |y_p(t)| > \mathcal{T}_p \end{array}\right.

where u_p(t) is the control input to be designed; the initial conditions of system (4) are y_p(t) = \varphi_p(t)\in\mathcal{C}([-h, 0], \mathbb{R}), p = 1, 2, ..., n .

It can be seen that b_{pq}(x_p(t)) and d_{pq}(x_p(t)) are discontinuous, so system (3) is a switched system with a discontinuous right-hand side. In this case a classical solution may not exist, and the solution of the system is therefore understood in the Filippov sense. The following definition is given.

Definition 1. [30] Consider the system \dot{x}(t) = F(x) , x\in\mathbb{R}^n , with discontinuous right-hand side; a set-valued map is defined as

    \begin{equation} \Phi(x) = \bigcap\limits_{\mu > 0}\bigcap\limits_{\delta(R) = 0}\overline{co}[F(B(x, \mu)\backslash{R})] \end{equation} (5)

where \overline{co}[G] is the closure of the convex hull of set G , B(x, \mu) = \{y:\|y-x\|\leqslant \mu\} , and \delta(R) is the Lebesgue measure of set R . A solution in Filippov's sense of the Cauchy problem for the above system with initial condition x(0) = x_0 is an absolutely continuous function x(t), t\in[0, T] , which satisfies x(0) = x_0 and the differential inclusion \dot{x}(t)\in\Phi(x) for a.e. t\in [0, T] .

    The research in this paper requires the following assumptions.

Assumption 1. There exist constants m_p such that |l_p(u)| \leq m_p holds for any u\in\mathbb{R} , p = 1, 2, ..., n .

    Assumption 2. The neuron activation function in system (3) satisfies l_q(0) = 0 and

    \begin{equation} \sigma^-_q \leq \dfrac{l_q(u)-l_q(v)}{u-v} \leq \sigma^+_q , u \neq v , \;q = 1, 2, ..., n \end{equation} (6)

where \sigma^-_q, \sigma^+_q are constants, and K_1 = diag\{\sigma^-_1, ..., \sigma^-_n\} , K_2 = diag\{\sigma^+_1, ..., \sigma^+_n\} .

Remark 1. Obviously, system (3) is a discontinuous state-dependent switched system. Filippov proposed that a discontinuous system shares its solution set with the differential inclusion defined by the set-valued map of its right-hand side [30]. According to the theory of differential inclusions and the definition of Filippov solutions, system (3) can be rewritten as:

    \begin{equation} \begin{aligned} \dot{x}_p(t) = &-k_px_p(t)+\sum^{n}_{q = 1}co\{\acute{b}_{pq} , \grave{b}_{pq}\}l_q(x_q(t))\\ & +\sum^{n}_{q = 1}co\{\acute{d}_{pq} , \grave{d}_{pq}\}l_q(x_q(t-h_q(t)))+\tilde{L}_p(t) \end{aligned} \end{equation} (7)

    where co\{\acute{b}_{pq}, \grave{b}_{pq}\} = [\underline{b}_{pq}, \overline{b}_{pq}] , co\{\acute{d}_{pq}, \grave{d}_{pq}\} = [\underline{d}_{pq}, \overline{d}_{pq}] , \underline{b}_{pq} = min\{\acute{b}_{pq}, \grave{b}_{pq}\} , \overline{b}_{pq} = max\{\acute{b}_{pq}, \grave{b}_{pq}\} , \underline{d}_{pq} = min\{\acute{d}_{pq}, \grave{d}_{pq}\} , \overline{d}_{pq} = max\{\acute{d}_{pq}, \grave{d}_{pq}\} .
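The interval hull above is computed entrywise from the two switched weight states. A small sketch with hypothetical weight matrices (illustrative numbers only):

```python
# Entrywise interval hull of the switched weights, as in Remark 1:
# co{b'_pq, b`_pq} = [min(b'_pq, b`_pq), max(b'_pq, b`_pq)].

def interval_hull(acute, grave):
    """Return (underline, overline) matrices built entrywise."""
    under = [[min(a, g) for a, g in zip(ra, rg)] for ra, rg in zip(acute, grave)]
    over = [[max(a, g) for a, g in zip(ra, rg)] for ra, rg in zip(acute, grave)]
    return under, over

acute_B = [[0.8, -1.0], [2.0, 0.5]]   # hypothetical b'_pq values
grave_B = [[1.2, -0.6], [1.5, 0.5]]   # hypothetical b`_pq values
under_B, over_B = interval_hull(acute_B, grave_B)
assert under_B == [[0.8, -1.0], [1.5, 0.5]]
assert over_B == [[1.2, -0.6], [2.0, 0.5]]
```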

Denote the interval matrices [\underline B, \overline B] = [\underline{b}_{pq}, \overline{b}_{pq}]_{n\times{n}} , [\underline D, \overline D] = [\underline{d}_{pq}, \overline{d}_{pq}]_{n\times{n}} . Then system (7) can be rewritten in the following matrix form

    \begin{equation} \begin{aligned} \dot{x}(t)\in &-Kx(t)+[\underline B, \overline B]l(x(t))\\ &+[\underline D, \overline D]l(x(t-h(t)))+\tilde{L}(t) \end{aligned} \end{equation} (8)

    where x(t) = [x_1(t), x_2(t), ..., x_n(t)]^T\in \mathbb{R}^n , l(x(t)) = [l_1(x_1(t)), l_2(x_2(t)), ..., l_n(x_n(t))]^T\in \mathbb{R}^n , l(x(t-h(t))) = [l_1(x_1(t-h_1(t))), l_2(x_2(t-h_2(t))), ..., l_n(x_n(t-h_n(t)))]^T\in \mathbb{R}^n , \tilde{L}(t) = [\tilde{L}_1(t), \tilde{L}_2(t), ..., \tilde{L}_n(t)]^T\in \mathbb{R}^n , K = diag\{k_1, k_2, ..., k_n\} , there are measurable function B(x(t))\in [\underline B, \overline B] , D(x(t))\in [\underline D, \overline D] such that system (8) can be of the form

    \begin{equation} \begin{aligned} \dot{x}(t) = &-Kx(t)+B(x(t))l(x(t))\\ &+D(x(t))l(x(t-h(t)))+\tilde{L}(t) \end{aligned} \end{equation} (9)

    Similarly, from system (4) we have

    \begin{equation} \begin{aligned} \dot{y}(t) = &-Ky(t)+B(y(t))l(y(t))+\tilde{L}(t)\\ &+D(y(t))l(y(t-h(t)))+U(t) \end{aligned} \end{equation} (10)

    where B(y(t))\in [\underline B, \overline B] , D(y(t))\in [\underline D, \overline D] , U(t) = [u_1(t), u_2(t), ..., u_n(t)]^T\in \mathbb{R}^n .

    Define the synchronization error e(t) = y(t)-x(t) . Then we can obtain the following synchronization error system

    \begin{equation} \begin{aligned} \dot{e}(t) = &-Ke(t)+B(y(t))f(e(t))+N(t)\\ &+D(y(t))f(e(t-h(t)))+U(t) \end{aligned} \end{equation} (11)

    where f(e(t)) = l(y(t))-l(x(t)) , f(e(t-h(t))) = l(y(t-h(t)))-l(x(t-h(t))) , N(t) = [B(y(t))-B(x(t))]l(x(t))+[D(y(t))-D(x(t))]l(x(t-h(t)))

    After the state measurements of the master-slave system are transmitted to the processor, the synchronization error e(z_k) is calculated and used to construct the controller

    \begin{equation} U(t) = -\tilde{K}e(z_k)-Csgn(v_1e(t)+v_2\dot e(t)) \end{equation} (12)

    where \tilde{K} is the controller gain to be determined. The updated control parameters are transmitted to the zero-order hold (ZOH) over the communication network.
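The control law (12) is straightforward to evaluate componentwise once \tilde{K} and C are fixed. A minimal sketch with diagonal gain matrices and hypothetical numerical values (not the gains computed in the paper's examples):

```python
# Componentwise evaluation of the discontinuous controller (12):
#   U(t) = -K_tilde e(z_k) - C sgn(v1 e(t) + v2 de(t)),
# shown here for diagonal K_tilde and C with hypothetical gains.

def sgn(v):
    """Sign function: -1, 0 or 1."""
    return (v > 0) - (v < 0)

def control_input(e_zk, e_t, de_t, K_tilde, C, v1, v2):
    return [-K_tilde[p] * e_zk[p] - C[p] * sgn(v1 * e_t[p] + v2 * de_t[p])
            for p in range(len(e_zk))]

u = control_input(e_zk=[0.2, -0.1], e_t=[0.3, -0.2], de_t=[0.0, 0.1],
                  K_tilde=[2.0, 2.0], C=[0.5, 0.5], v1=1.0, v2=0.5)
# First component: -2*0.2 - 0.5*sgn(0.3) = -0.9
assert abs(u[0] - (-0.9)) < 1e-12
```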

Definition 2. [31] If the error system (15) is exponentially stable, then the master system (9) and the slave system (10) are exponentially synchronized; that is, there exist two positive scalars \alpha , \beta satisfying

    \begin{equation} \|e(t)\|\leq \alpha e^{-\beta t}\underset{-h\leq z\leq 0}{sup}\{\|e(z)\|, \|\dot e(z)\|\} \end{equation} (13)

where \alpha and \beta are the exponential decay coefficient and the decay rate, respectively.

Lemma 1. [32] For a given time-varying matrix B(t)\in [\underline{B}, \bar B] with \underline{B}, \bar B\in\mathbb{R}^{n\times n} , there exist matrices G , E and a time-varying matrix Z(t) with appropriate dimensions such that

    \begin{equation} B(t) = \frac{1}{2}(\underline{B}+\bar B)+GZ(t)E \end{equation} (14)

    and Z^T(t)Z(t)\leq I .

    Remark 2. Let,

    \begin{aligned} \tilde{B} = &\left( \dfrac{\underline{b}_{pq}+\bar{b}_{pq}}{2}\right)_{n\times n} = \dfrac{\underline{B}+\bar B}{2}, \quad B^* = (b^*_{pq})_{n\times n} = \left( \dfrac{\bar{b}_{pq}-\underline{b}_{pq}}{2}\right)_{n\times n} = \dfrac{\bar B-\underline{B}}{2}, \\ \tilde{D} = &\left( \dfrac{\underline{d}_{pq}+\bar{d}_{pq}}{2}\right)_{n\times n} = \dfrac{\underline{D}+\bar D}{2}, \quad D^* = (d^*_{pq})_{n\times n} = \left( \dfrac{\bar{d}_{pq}-\underline{d}_{pq}}{2}\right)_{n\times n} = \dfrac{\bar D-\underline{D}}{2}, \\ G^b = &\begin{bmatrix}G^b_1&G^b_2&...&G^b_n\end{bmatrix}, \quad G^d = \begin{bmatrix}G^d_1&G^d_2&...&G^d_n\end{bmatrix}, \\ G^b_p = &\begin{bmatrix}0_{p-1, n}\\(b^*_{p1})^\omega, (b^*_{p2})^\omega, ..., (b^*_{pn})^\omega\\0_{n-1, n}\end{bmatrix}, \quad G^d_p = \begin{bmatrix}0_{p-1, n}\\(d^*_{p1})^\omega, (d^*_{p2})^\omega, ..., (d^*_{pn})^\omega\\0_{n-1, n}\end{bmatrix}, \omega\in [0, 1], \\ E^b = &\begin{bmatrix}E^b_1&E^b_2&...&E^b_n\end{bmatrix}^T, \quad E^d = \begin{bmatrix}E^d_1&E^d_2&...&E^d_n\end{bmatrix}^T, \\ E^b_p = &diag\{(b^*_{p1})^{1-\omega}, (b^*_{p2})^{1-\omega}, ..., (b^*_{pn})^{1-\omega}\}, \ E^d_p = diag\{(d^*_{p1})^{1-\omega}, (d^*_{p2})^{1-\omega}, ..., (d^*_{pn})^{1-\omega}\}, \\ Z^i(t)& = diag\{Z^i_{11}(t), ...Z^i_{1n}(t), Z^i_{21}(t), ..., Z^i_{2n}(t), ..., Z^i_{n1}(t), ..., Z^i_{nn}(t)\}, \ i = 1, 2, 3, 4, \end{aligned}

    according to Lemma 1, we have B(x(t)) = \tilde{B}+G^bZ^1(t)E^b , D(x(t)) = \tilde{D}+G^dZ^2(t)E^d , B(y(t)) = \tilde{B}+G^bZ^3(t)E^b , D(y(t)) = \tilde{D}+G^dZ^4(t)E^d , and (Z^i(t))^TZ^i(t)\leq I .
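The decomposition in Remark 2 can be checked entrywise: with \omega = 1/2 , each entry of G^bZ^1(t)E^b reduces to b^*_{pq}Z^1_{pq}(t) , so any B_{pq}\in[\underline{b}_{pq}, \overline{b}_{pq}] is recovered by Z^1_{pq}(t) = (B_{pq}-\tilde{B}_{pq})/b^*_{pq}\in[-1, 1] . A scalar-per-entry sketch with illustrative numbers:

```python
# Entrywise check of B = B_tilde + (b*)^w * Z * (b*)^(1-w), w = 0.5:
# Z = (B - B_tilde) / b* must lie in [-1, 1] and reconstruct B exactly.

under_b, over_b = 0.8, 1.2          # hypothetical interval bounds for one entry
b_tilde = (under_b + over_b) / 2    # midpoint: 1.0
b_star = (over_b - under_b) / 2     # radius:   0.2

B_entry = 1.1                       # any value inside [under_b, over_b]
Z = (B_entry - b_tilde) / b_star    # 0.5, inside [-1, 1]
w = 0.5
reconstructed = b_tilde + (b_star ** w) * Z * (b_star ** (1 - w))

assert -1.0 <= Z <= 1.0
assert abs(reconstructed - B_entry) < 1e-12
```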

    Then, the error system (11) can be written in the following form:

    \begin{equation} \begin{aligned} \dot{e}(t) = & -Ke(t)+(\tilde{D}+G^dZ^4(t)E^d)f(e(t-h(t)))\\ &+(\tilde{B}+G^bZ^3(t)E^b)f(e(t))+N(t)\\ &+(-\tilde{K}e(t)-Csgn(v_1e(t)+v_2\dot e(t))) \end{aligned} \end{equation} (15)

    where N(t) = (G^b(Z^3(t)-Z^1(t))E^b)l(x(t))+(G^d(Z^4(t)-Z^2(t))E^d)l(x(t-h(t))) .

    Lemma 2. [33] For a symmetric matrix A = \begin{bmatrix}A_{11}&A_{12}\\ * &A_{22} \end{bmatrix} , where A_{11}\in\mathbb{R}^{n\times n} , the following conditions are equivalent:

\begin{equation} \begin{split} &(1) A < 0 ;\\ &(2) A_{11} < 0, A_{22}-A_{12}^TA_{11}^{-1}A_{12} < 0 ; \\ &(3) A_{22} < 0, A_{11}-A_{12}A_{22}^{-1}A_{12}^T < 0 . \end{split} \end{equation} (16)
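For scalar blocks Lemma 2 can be spot-checked directly: A = \begin{bmatrix}a_{11}&a_{12}\\ a_{12}&a_{22}\end{bmatrix} is negative definite iff a_{11} < 0 and the Schur complement a_{22}-a_{12}^2/a_{11} < 0 , which must agree with the leading-principal-minor test ( a_{11} < 0 and \det A > 0 ). A quick numeric sketch with arbitrary test matrices:

```python
# Schur-complement vs. principal-minor test for negative definiteness
# of the 2x2 symmetric matrix A = [[a11, a12], [a12, a22]].

def schur_negdef(a11, a12, a22):
    return a11 < 0 and (a22 - a12 * a12 / a11) < 0

def minor_negdef(a11, a12, a22):
    return a11 < 0 and (a11 * a22 - a12 * a12) > 0

for a11, a12, a22 in [(-2.0, 1.0, -3.0), (-1.0, 2.0, -1.0), (-0.5, 0.1, -0.5)]:
    assert schur_negdef(a11, a12, a22) == minor_negdef(a11, a12, a22)

assert schur_negdef(-2.0, 1.0, -3.0)      # negative definite
assert not schur_negdef(-1.0, 2.0, -1.0)  # indefinite (det < 0)
```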

    We introduce two lemmas that are critical to the results of this paper.

Lemma 3. [26] (Canonical Bessel-Legendre inequality) For any n-dimensional positive definite matrix P (P\in\mathbb{R}^{n\times n}) , any integer N\geq0 , scalars a_1 < a_2 , and any function e\in\mathscr{L}_2([a_1, a_2] \rightarrow \mathbb{R}^n) , the inequality

\begin{equation} \int_{a_1}^{a_2}\dot e^T(s)P\dot e(s)ds\geq \dfrac{1}{a_2-a_1}\xi^T\Pi^T_{2N}\Pi^T_{1N}\overline{P}_N\Pi_{1N}\Pi_{2N}\xi \end{equation} (17)

    holds, where

    \begin{split} \overline{P}_N& = \ \mathrm{diag}\{P, 3P, ... , (2N+1)P\}\\ \Pi_{1N}& = \begin{bmatrix} I & 0&0&\cdots&0 \\ (-1)^1I&(-1)^1p^1_1&0 &\cdots & 0\\ (-1)^2I&(-1)^2p^2_1I&(-1)^2p^2_2I& \cdots & 0\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ (-1)^NI&(-1)^Np^N_1I&(-1)^Np^N_2I& \cdots & (-1)^Np^N_NI\\ \end{bmatrix}\\ \Pi_{2N}& = \begin{bmatrix} I & -I&0&\cdots&0 \\ 0&-I&I&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&-I&0&\cdots&NI\\ \end{bmatrix}\\ \end{split}

with \xi = col\{e(a_2), e(a_1), \dfrac{1}{a_2-a_1}\bar\xi\} , \bar\xi = col\{\int_{a_1}^{a_2}e(s)ds, \cdots, \int_{a_1}^{a_2}\left(\dfrac{a_2-s}{a_2-a_1}\right)^{N-1}e(s)ds\} , and p^i_k = (-1)^k\binom{i}{k}\binom{i+k}{k} .
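For N = 0 and scalar e , inequality (17) reduces to Jensen's inequality \int_{a_1}^{a_2}\dot e^2(s)ds \geq (e(a_2)-e(a_1))^2/(a_2-a_1) , which is easy to check numerically. A sketch with the arbitrary test function e(s) = \sin s on [0, 1] :

```python
# Scalar N = 0 case of the canonical B-L inequality (Jensen's inequality):
#   \int_{a1}^{a2} \dot e(s)^2 ds >= (e(a2) - e(a1))^2 / (a2 - a1),
# checked by a midpoint Riemann sum for e(s) = sin(s) on [0, 1].

import math

a1, a2, n = 0.0, 1.0, 100000
ds = (a2 - a1) / n
# midpoint rule for the left-hand side, with \dot e(s) = cos(s)
lhs = sum(math.cos(a1 + (k + 0.5) * ds) ** 2 for k in range(n)) * ds
rhs = (math.sin(a2) - math.sin(a1)) ** 2 / (a2 - a1)

assert lhs >= rhs   # lhs ≈ 0.727, rhs ≈ 0.708
```

Larger N tightens the bound by appending higher-order Legendre moments of e to \xi .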

Lemma 4. [34] For \omega_1, \omega_2\in \mathbb{R}^m , \alpha\in (0, 1) , and given m\times m constant real matrices \aleph_1 > 0, \aleph_2 > 0 , the following inequality is satisfied for any Y_1, Y_2\in\mathbb{R}^{m\times m} :

    \begin{equation} \begin{split} \dfrac{1}{\alpha}\omega_1^T\aleph_1\omega_1&+\dfrac{1}{1-\alpha}\omega_2^T\aleph_2\omega_2\geq \\ &\omega_1^T[\aleph_1+(1-\alpha)(\aleph_1-Y_1\aleph_2^{-1}Y^T_1)]\omega_1\\ &+\omega_2^T[\aleph_2+\alpha(\aleph_2-Y_2\aleph_1^{-1}Y^T_2)]\omega_2\\ &+2\omega_1^T[\alpha Y_1+(1-\alpha)Y_2]\omega_2 \end{split} \end{equation} (18)
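In the scalar case ( m = 1 ), Lemma 4 can be spot-checked over a grid of \alpha and slack variables Y_1, Y_2 . A sketch with arbitrary positive scalars in place of \aleph_1, \aleph_2 :

```python
# Scalar spot-check of the reciprocally convex bound (18):
#   w1^2 N1/a + w2^2 N2/(1-a)
#     >= w1^2 [N1 + (1-a)(N1 - Y1^2/N2)] + w2^2 [N2 + a(N2 - Y2^2/N1)]
#        + 2 w1 w2 [a Y1 + (1-a) Y2]

def gap(a, N1, N2, w1, w2, Y1, Y2):
    """Left-hand side minus right-hand side of (18); must be >= 0."""
    lhs = w1 * w1 * N1 / a + w2 * w2 * N2 / (1 - a)
    rhs = (w1 * w1 * (N1 + (1 - a) * (N1 - Y1 * Y1 / N2))
           + w2 * w2 * (N2 + a * (N2 - Y2 * Y2 / N1))
           + 2 * w1 * w2 * (a * Y1 + (1 - a) * Y2))
    return lhs - rhs

for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    for Y1 in [-1.0, 0.0, 1.0]:
        for Y2 in [-0.5, 0.0, 0.5]:
            assert gap(a, 2.0, 3.0, 1.0, -1.0, Y1, Y2) >= -1e-9
```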

Lemma 5. [35] For any vectors x, y\in \mathbb{R}^n , real matrices G , E and Z(t) of appropriate dimensions with Z^T(t)Z(t)\leq I , and any scalar \varepsilon > 0 , the following inequality holds:

\begin{equation} 2x^TGZ(t)Ey\leq \varepsilon^{-1}x^TGG^Tx+\varepsilon y^TE^TEy \end{equation} (19)

Lemma 6. [36] For a matrix R\in \mathbb{S}^n_+ , scalars m and n with m < n , and a vector function x , the following inequality holds

    \begin{equation} \begin{split} \frac{(n-m)^2}{2}\int_{m}^{n}\int_{\theta}^{n}x^T(s)Rx(s)dsd\theta \geq \left(\int_{m}^{n}\int_{\theta}^{n}x(s)dsd\theta\right)^TR\left(\int_{m}^{n}\int_{\theta}^{n}x(s)dsd\theta\right) \end{split} \end{equation} (20)
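Lemma 6 can likewise be spot-checked in the scalar case by nested Riemann sums. A sketch with the arbitrary test function x(s) = e^s on [0, 1] :

```python
# Scalar spot-check of the double-integral Jensen inequality (20):
#   ((n-m)^2 / 2) * I2(x^2) >= I2(x)^2,
# where I2(f) = \int_m^n \int_theta^n f(s) ds dtheta, for x(s) = exp(s).

import math

m_, n_, steps = 0.0, 1.0, 400
d = (n_ - m_) / steps

def double_int(f):
    """Nested midpoint rule for \int_m^n \int_theta^n f(s) ds dtheta."""
    total = 0.0
    for i in range(steps):
        theta = m_ + (i + 0.5) * d
        inner = sum(f(theta + (j + 0.5) * (n_ - theta) / steps)
                    for j in range(steps)) * (n_ - theta) / steps
        total += inner * d
    return total

lhs = ((n_ - m_) ** 2 / 2) * double_int(lambda s: math.exp(2 * s))
rhs = double_int(math.exp) ** 2
assert lhs >= rhs
```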

    To simplify the processing of problems, the following terms for vectors and matrices are defined as

\begin{equation} \begin{split} \xi^T(t)& = \left [e^T(t)\quad e^T(t-h(t))\quad e^T(t-h)\quad \dot e^T(t)\quad \dot e^T(t-h(t))\right .\\ &\qquad\left .\dot e^T(t-h)\quad f(e(t))\quad f(e(t-h(t)))\quad f(e(t-h))\quad \zeta^T_{N}(t)\right. ], \\ \zeta_{N}(t)& = col\{\rho_0(t), \beta_0(t), \varpi_0(t), \iota_0(t), ..., \rho_{N-1}(t), \beta_{N-1}(t), \varpi_{N-1}(t), \iota_{N-1}(t)\}, \\ \rho_i(t)& = \dfrac{1}{h^i(t)}\int_{t-h(t)}^{t}(t-s)^ie(s)ds, \quad \beta_i(t) = \dfrac{1}{h^{i+1}(t)}\int_{t-h(t)}^{t}(t-s)^ie(s)ds, \\ \varpi_i(t)& = \dfrac{1}{(h-h(t))^i}\int_{t-h}^{t-h(t)}(t-h(t)-s)^ie(s)ds, \\ \iota_i(t)& = \dfrac{1}{(h-h(t))^{i+1}}\int_{t-h}^{t-h(t)}(t-h(t)-s)^ie(s)ds, \quad i = 0, 1, ..., N-1\\ S_1& = col\{-K^-, I\}, \qquad S_2 = col\{K^+, -I\}, \\ K_1& = diag\{\sigma^-_1, ... , \sigma^-_n\}, \quad K_2 = diag\{\sigma^+_1, ... , \sigma^+_n\}, \\ a_{p}& = \begin{bmatrix} 0_{q\times(p-1)q}&I_q&0_{q\times(4N+9-p)q} \end{bmatrix}. \end{split} \end{equation} (21)

Theorem 1. For given scalars h > 0 , v_{1}\geq0 , v_{2}\geq0 , \mu , \omega\geq0 , and N\in\mathbb{N} , the error system (15) under the control law (12) is exponentially stable, which ensures exponential synchronization between the master and slave systems, if there exist matrices P > 0 , Q_i > 0, i = 1, 2 , Q_{iN} > 0, i = 3, 4, 5 , U > 0, U_{1} > 0 , diagonal matrices \bar{H} > 0 , \Lambda_i > 0, i = 1, 2, 3 , \Upsilon_i > 0, i = 1, 2, 3 , \mathcal{L}_i > 0, i = 1, 2, 3 , and matrices \tilde{H} , Y_{1N}, Y_{2N} and \Gamma_N of appropriate dimensions such that

    \begin{array} \Xi_N(0, \mu)|_{\mu = \mu_1, \mu_2} = \left [\begin{array}{c c c c} \tilde{\Xi}_{N}(0, \mu)&(v_1a_{1}+v_2a_{4})^T\bar{H}G^b&(v_1a_{1}+v_2a_{4})^T\bar{H}G^d&\mathcal{F}^T_{9N}\Pi^T_{2N}\Pi^T_{1N}Y_{1N}\\ * & -\varepsilon_1I_n & 0&0\\ * & * &-\varepsilon_2I_n&0\\ * & * & *&-\bar U_{N} \end{array}\right] < 0 \end{array} (22)
\begin{array} \Xi_N(h, \mu)|_{\mu = \mu_1, \mu_2} = \left [\begin{array}{c c c c} \tilde{\Xi}_{N}(h, \mu)&(v_1a_{1}+v_2a_{4})^T\bar{H}G^b&(v_1a_{1}+v_2a_{4})^T\bar{H}G^d&\mathcal{F}^T_{10N}\Pi^T_{2N}\Pi^T_{1N}Y_{2N}\\ * & -\varepsilon_1I_n & 0&0\\ * & * &-\varepsilon_2I_n&0\\ * & * & *&-\bar U_{N} \end{array}\right] < 0 \end{array} (23)

    In addition, the expected controller gain is as follows

    \begin{equation} \tilde{K} = \bar{H}^{-1}\tilde{H} \end{equation} (24)
\begin{equation} C = diag\{c_1, c_2, c_3, ..., c_n\}, \quad c_p = 2\sum\limits_{q = 1}^{n}(b^*_{pq}+d^*_{pq})m_q \end{equation} (25)

    where

\begin{equation} \begin{split} \tilde{\Xi}_N(h(t), \dot{h}(t)) = &\sum\limits_{i = 1}^{5}\Phi_{i}+\tilde{\Phi}_{1}(0)+\tilde{\Phi}_{2}(0)+He\{\Gamma_N^T\mathcal{X}_{11N}\}\\ &+\tilde{\Phi}_{2}(0)+He\{[v_1a_1+v_2a_4]^T[\bar{H}(-a_4-Ka_1)-\tilde{H}a_1]\}\\ &+\mathcal{D}^T_6(S_1\mathcal{L}_1S_2^T+S_2\mathcal{L}_1S^T_1)\mathcal{D}_6+\mathcal{D}^T_7(S_1\mathcal{L}_2S_2^T+S_2\mathcal{L}_2S^T_1)\mathcal{D}_7\\ &+\mathcal{D}^T_8(S_1\mathcal{L}_3S_2^T+S_2\mathcal{L}_3S^T_1)\mathcal{D}_8-\frac{2}{h^2}\mathcal{D}_9^TU_1\mathcal{D}_9\\ \Phi_{1} = &\lambda\mathcal{D}^T_1P\mathcal{D}_1+2\mathcal{D}^T_1P\mathcal{D}_2\\ \Phi_{2} = &e^{\lambda(t+h)}\mathcal{D}^T_3Q_1\mathcal{D}_3-e^{\lambda t}\mathcal{D}^T_5Q_2\mathcal{D}_5+(1-\dot{h}(t))e^{\lambda(t+h-h(t))}\mathcal{D}^T_4(Q_2-Q_1)\mathcal{D}_4\\ \Phi_{3} = &e^{\lambda h}\mathcal{F}^T_{1N}Q_{3N}\mathcal{F}_{1N}-\mathcal{F}^T_{4N}Q_{4N}\mathcal{F}_{4N}-(1-\dot{h}(t))e^{\lambda (h-h(t))}\mathcal{F}^T_{2N}Q_{3N}\mathcal{F}_{2N}\\ &+(1-\dot{h}(t))e^{\lambda(h-h(t))}\mathcal{F}^T_{3N}Q_{4N}\mathcal{F}_{3N}+e^{\lambda h}\mathcal{F}^T_{5N}Q_{5N}\mathcal{F}_{5N}-\mathcal{F}^T_{6N}Q_{5N}\mathcal{F}_{6N}\\ &+He\{\mathcal{F}^T_{7N}Q_{3N}\mathcal{F}_{8N}+\mathcal{F}^T_{9N}Q_{4N}\mathcal{F}_{10N}\}+\mathcal{F}^T_{11N}Q_{5N}\mathcal{F}_{12N}\\ \Phi_{4} = &h^2e^{\lambda h}a_{4}^T(U+\frac{U_1}{2})a_{4}+\frac{h^2}{2}e^{\lambda h}a_{4}^TU_1a_{4}\\ \Phi_{5} = &2[a_{1}(\Lambda_1K^+-\Upsilon_1K^-)+a_{7}(\Upsilon_1-\Lambda_1)]a_4\\ &+2[a_{3}(\Lambda_3K^+-\Upsilon_3K^-)+a_{9}(\Upsilon_3-\Lambda_3)]a_6\\ &+2(1-\dot{h}(t))[a_{2}(\Lambda_2K^+-\Upsilon_2K^-)+a_{8}(\Upsilon_2-\Lambda_2)]a_5\\ \tilde{\Phi}_{1}(\kappa)& = 2(v_1a_{1}+v_2a_{4})^T\bar{H}\tilde{B}a_{7}+\varepsilon_1a^T_{7}(E^b)^TE^ba_{7}\\ &+\frac{\kappa}{\varepsilon_1}(v_1a_{1}+v_2a_{4})^T\bar{H}G^b(G^b)^T\bar{H}^T(v_1a_{1}+v_2a_{4})\\ \tilde{\Phi}_{2}(\kappa)& = 2(v_1a_{1}+v_2a_{4})^T\bar{H}\tilde{D}a_{8}+\varepsilon_2a^T_{8}(E^d)^TE^da_{8}\\ 
&+\frac{\kappa}{\varepsilon_2}(v_1a_{1}+v_2a_{4})^T\bar{H}G^d(G^d)^T\bar{H}^T(v_1a_{1}+v_2a_{4})\\ \mathcal{D}_1 = &col\{a_1, a_2, a_3, a_{10}, a_{12}\}, \ \mathcal{D}_2 = col\{a_4, a_5, a_6, a_1-a_2, a_2-a_3\}\\ \mathcal{D}_3 = &col\{a_1, a_4, a_7\}, \ \mathcal{D}_4 = col\{a_2, a_5, a_8\}, \ \mathcal{D}_5 = col\{a_3, a_6, a_9\}, \ \mathcal{D}_6 = col\{a_1, a_7\}\\ \mathcal{D}_7 = &col\{a_2, a_8\}, \ \mathcal{D}_8 = col\{a_3, a_9\}, \ \mathcal{D}_9 = ha_1-a_{10}-a_{12}\\ \mathcal{F}_{1N} = &col\{a_{4}, a_{1}, a_{1}, a_{2}, a_{3}, a_{10}, a_{14}, ..., a_{4N+6}\}\\ \mathcal{F}_{2N} = &col\{a_{5}, a_{2}, a_{1}, a_{2}, a_{3}, a_{10}, a_{14}, ..., a_{4N+6}\}\\ \mathcal{F}_{3N} = &col\{a_{5}, a_{2}, a_{1}, a_{2}, a_{3}, a_{12}, a_{16}, ..., a_{4N+8}\}\\ \mathcal{F}_{4N} = &col\{a_{6}, a_{3}, a_{1}, a_{2}, a_{3}, a_{12}, a_{16}, ..., a_{4N+8}\}\\ \mathcal{F}_{5N} = &col\{a_{4}, a_{1}, a_{3}, a_{10}, a_{12}, ..., a_{4N+6}, a_{4N+8}\}\\ \mathcal{F}_{6N} = &col\{a_{6}, a_{1}, a_{3}, a_{10}, a_{12}, ..., a_{4N+6}, a_{4N+8}\}\\ \mathcal{F}_{7N} = &col\{0, 0, a_{4}, (1-\dot h(t))a_{5}, a_{6}, h_{70}, ..., h_{7(N-1)}\} \end{split} \end{equation} (26)
\begin{equation} \begin{split} h_{70} = &a_{1}-(1-\dot h(t))a_{2}\\ h_{7i} = &-i\dot h(t)a_{4i+11}-(1-\dot h(t))a_{2}+ia_{4i+7}, \ i = 1, ..., N-1\\ \mathcal{F}_{8N} = &col\{a_{1}-a_{2}, a_{10}, h(t)a_{1}, h(t)a_{2}, h(t)a_{3}, h(t)a_{10}, h(t)a_{11}, ..., h(t)a_{4N+6}\}\\ \mathcal{F}_{9N} = &col\{0, 0, a_{4}, (1-\dot h(t))a_{5}, a_{6}, h_{90}, ..., h_{9(N-1)}\}\\ h_{90} = &(1-\dot h(t))a_{2}-a_{3}\\ h_{9i} = &i\dot h(t)a_{4i+13}-a_{3}+i(1-\dot h(t))a_{4i+9}, \ i = 1, ..., N-1\\ \mathcal{F}_{10N}& = col\{ a_{2}-a_{3}, a_{12}, (h-h(t))a_{1}, (h-h(t))a_{2}, \\ &(h-h(t))a_{3}, (h-h(t))a_{12}, (h-h(t))a_{16}, ..., (h-h(t))a_{4N+8}\} \\ \mathcal{F}_{11N}& = col\{0, a_4, a_6, a_1-(1-\dot h(t))a_2, (1-\dot h(t))a_2-a_3, ..., h_{70}, h_{90}\}\\ \mathcal{F}_{12N}& = col\{a_1-a_3, ha_1, ha_3, ha_{10}, ha_{12}, ..., ha_{4N+6}, ha_{4N+8}\}\\ \mathcal{F}_{13N}& = col\{a_{1}, a_{2}, a_{11}, a_{15}, ..., a_{4N+7}\}, \ \mathcal{F}_{14N} = col\{a_{2}, a_{3}, a_{13}, a_{17}, ..., a_{4N+9}\}\\ \mathcal{F}_{15N}& = col\{a_{10}-h(t)a_{11}, a_{12}-h(t)a_{13}, ..., a_{4N+6}-h(t)a_{4N+7}, a_{4N+8}-h(t)a_{4N+9}\}\\ \bar U_{N}& = diag\{U, 3U, ..., (2N+1)U\}\notag \end{split} \end{equation}

Proof. First, the following augmented LKF candidate is constructed:

    \begin{equation} \begin{split} V(t, e(t))& = \sum\limits_{i = 1}^{5}V_i(t, e(t)) \end{split} \end{equation} (27)

    where

    \begin{equation} \begin{split} V_1(t, e(t)) = &e^{\lambda t}\varphi^T(t)P\varphi(t)\\ V_2(t, e(t)) = &\int_{t-h(t)}^{t}e^{\lambda(s+h)}\varphi^T_1(s)Q_1 \varphi_1(s)ds+\int_{t-h}^{t-h(t)}e^{\lambda(s+h)}\varphi_1^T(s)Q_2\varphi_1(s)ds\\ V_3(t, e(t)) = &\int_{t-h(t)}^{t}e^{\lambda(s+h)}\varphi^T_2(t, s)Q_{3N} \varphi_2(t, s)ds+\int_{t-h}^{t-h(t)}e^{\lambda(s+h)}\varphi^T_3(t, s)Q_{4N} \varphi_3(t, s)ds\\ &+\int_{t-h}^{t}e^{\lambda(s+h)}\varphi^T_4(t, s)Q_{5N} \varphi_4(t, s)ds\\ V_4(t, e(t)) = &h\int_{-h}^{0}\int_{t+\theta}^{t}e^{\lambda(s+h)}\dot{e}^T(s)U\dot{e}(s)dsd\theta+\int_{t-h}^{t}\int_{\theta}^{t}\int_{u}^{t}e^{\lambda(s+h)}\dot{e}^T(s)U_1\dot{e}(s)dsdud\theta\\ V_5(t, e(t)) = &2e^{\lambda t}\sum\limits_{p = 1}^{n}\int_{0}^{e_p(t)}(\Lambda_{1p}(\sigma^+_ps-f_p(s))+\Upsilon_{1p}(f_p(s)-\sigma^-_ps))ds\\ &+2e^{\lambda t}\sum\limits_{p = 1}^{n}\int_{0}^{e_p(t-h(t))}(\Lambda_{2p}(\sigma^+_ps-f_p(s))+\Upsilon_{2p}(f_p(s)-\sigma^-_ps))ds\\ &+2e^{\lambda t}\sum\limits_{p = 1}^{n}\int_{0}^{e_p(t-h)}(\Lambda_{3p}(\sigma^+_ps-f_p(s))+\Upsilon_{3p}(f_p(s)-\sigma^-_ps))ds\\ \end{split} \end{equation} (28)

    and

    \begin{equation} \begin{split} &\varphi(t) = col\left\lbrace e(t), e(t-h(t)), e(t-h), \int_{t-h(t)}^{t}e(s)ds, \int_{t-h}^{t-h(t)}e(s)ds\right\rbrace \\ &\varphi_1(t) = col\{e(t), \dot{e}(t), f(e(t))\} \\ &\varphi_2(t, s) = col\{\dot e(s), e(s), e(t), e(t-h(t)), e(t-h), \phi^1_N(t)\}\\ &\varphi_3(t, s) = col\{\dot e(s), e(s), e(t), e(t-h(t)), e(t-h), \phi^2_N(t)\}\\ &\varphi_4(t, s) = col\{\dot e(s), e(t), e(t-h), \phi^1_N(t), \phi^2_N(t)\}\\ &\phi^1_N(t) = col\{\rho_0(t), \rho_1(t), ..., \rho_{N-1}(t)\}\\ &\phi^2_N(t) = col\{\varpi_0(t), \varpi_1(t), ..., \varpi_{N-1}(t)\}\\ \end{split} \end{equation} (29)

Differentiating Eq (27) along the trajectory of the system, we have

\begin{equation} \begin{split} \dot{V}_1(t, e(t)) = &e^{\lambda t}\xi^T(t)\Phi_{1}\xi(t) \\ \dot{V}_2(t, e(t)) = &e^{\lambda(t+h)}\varphi_1^T(t)Q_1\varphi_1(t)-e^{\lambda t}\varphi_1^T(t-h)Q_2\varphi_1(t-h)\\ &-(1-\dot{h}(t))e^{\lambda(t+h-h(t))}\varphi_1^T(t-h(t))Q_1\varphi_1(t-h(t))\\ &+(1-\dot{h}(t))e^{\lambda(t+h-h(t))}\varphi_1^T(t-h(t))Q_2\varphi_1(t-h(t))\\ = &e^{\lambda t}\xi^T(t)\Phi_{2}\xi(t)\\ \dot{V}_3(t, e(t)) = &e^{\lambda(t+h)}\varphi^T_2(t, t)Q_{3N}\varphi_2(t, t)-e^{\lambda t}\varphi^T_3(t, t-h)Q_{4N}\varphi_3(t, t-h)\\ &+e^{\lambda(t+h)}\varphi^T_4(t, t)Q_{5N}\varphi_4(t, t)-e^{\lambda t}\varphi^T_4(t, t-h)Q_{5N}\varphi_4(t, t-h) \\&-e^{\lambda(t+h-h(t))}\varphi^T_2(t, t-h(t))Q_{3N}\varphi_2(t, t-h(t)) \\&+\dot{h}(t)e^{\lambda(t+h-h(t))}\varphi^T_2(t, t-h(t))Q_{3N}\varphi_2(t, t-h(t)) \\&+e^{\lambda(t+h-h(t))}\varphi^T_3(t, t-h(t))Q_{4N}\varphi_3(t, t-h(t)) \\&-\dot{h}(t)e^{\lambda(t+h-h(t))}\varphi^T_3(t, t-h(t))Q_{4N}\varphi_3(t, t-h(t)) \\&+2\int_{t-h(t)}^{t}e^{\lambda(s+h)}\varphi^T_2(t, s)Q_{3N}\dfrac{\partial\varphi_2(t, s)}{\partial t}ds\\ &+2\int_{t-h}^{t-h(t)}e^{\lambda(s+h)}\varphi^T_3(t, s)Q_{4N}\dfrac{\partial\varphi_3(t, s)}{\partial t}ds\\ &+2\int_{t-h}^{t}e^{\lambda(s+h)}\varphi^T_4(t, s)Q_{5N}\dfrac{\partial\varphi_4(t, s)}{\partial t}ds\\ \leq &e^{\lambda t}\xi^T(t)\Phi_{3}\xi(t) \end{split} \end{equation} (30)

Combining with Eq (29), we can get

    \begin{equation*} \begin{split} &\varphi_2(t, t) = \mathcal{F}_{1N}\xi(t), \quad\varphi_2(t, t-h(t)) = \mathcal{F}_{2N}\xi(t)\\ &\varphi_3(t, t-h(t)) = \mathcal{F}_{3N}\xi(t), \quad\varphi_3(t, t-h) = \mathcal{F}_{4N}\xi(t)\\ &\varphi_4(t, t) = \mathcal{F}_{5N}\xi(t), \quad\varphi_4(t, t-h) = \mathcal{F}_{6N}\xi(t) \end{split} \end{equation*}
    \begin{equation*} \begin{split} &\dfrac{\partial\varphi_2(t, s)}{\partial t} = \mathcal{F}_{7N}\xi(t), \ \int_{t-h(t)}^{t}\varphi_2(t, s)ds = \mathcal{F}_{8N}\xi(t)\\ &\dfrac{\partial\varphi_3(t, s)}{\partial t} = \mathcal{F}_{9N}\xi(t), \ \int_{t-h}^{t-h(t)}\varphi_3(t, s)ds = \mathcal{F}_{10N}\xi(t)\\ &\dfrac{\partial\varphi_4(t, s)}{\partial t} = \mathcal{F}_{11N}\xi(t), \ \int_{t-h}^{t}\varphi_4(t, s)ds = \mathcal{F}_{12N}\xi(t)\\ \end{split} \end{equation*}
\begin{equation} \begin{split} \dot{V}_4(t, e(t)) \leq& h^2e^{\lambda(t+h)}\dot{e}^T(t)U\dot{e}(t)-he^{\lambda t}\int_{t-h}^{t}\dot{e}^T(s)U\dot{e}(s)ds\\ &+\frac{h^2}{2}e^{\lambda (t+h)}\dot{e}^T(t)U_1\dot{e}(t)-e^{\lambda t}\int_{t-h}^{t}\int_{\theta}^{t}\dot{e}^T(s)U_1\dot{e}(s)dsd\theta\\ = &e^{\lambda t}\left[\xi^T(t)\Phi_{4}\xi(t)-h\int_{t-h}^{t}\dot{e}^T(s)U\dot{e}(s)ds-\int_{t-h}^{t}\int_{\theta}^{t}\dot{e}^T(s)U_1\dot{e}(s)dsd\theta\right] \end{split} \end{equation} (31)

Now, using Lemma 3, the single-integral terms satisfy the following bounds:

\begin{equation} \begin{split} -h\int_{t-h(t)}^{t}\dot{e}^T(s)U\dot{e}(s)ds&\leq-\frac{h}{h(t)}\xi^T(t)\mathcal{F}^T_{13N}\Pi^T_{2N}\Pi^T_{1N}\bar U_{N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{13N}\xi(t)\\ -h\int_{t-h}^{t-h(t)}\dot{e}^T(s)U\dot{e}(s)ds&\leq-\frac{h}{h-h(t)}\xi^T(t)\mathcal{F}^T_{14N}\Pi^T_{2N}\Pi^T_{1N}\bar U_{N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{14N}\xi(t) \end{split} \end{equation} (32)
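The bound above relies on the canonical Bessel-Legendre inequality. A scalar numerical sketch (our illustration, not the paper's matrix lemma) shows both the bound itself and the fact that it tightens monotonically as the order N grows, which is the source of the reduced conservatism claimed for higher-order B-L inequalities:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Scalar Bessel-Legendre bound:
#   (b - a) * int_a^b w(s)^2 ds >= sum_{k=0}^{N} (2k+1) * Omega_k^2,
# with Omega_k = int_a^b L_k(s) w(s) ds and L_k the Legendre polynomial
# shifted to [a, b]. By Parseval, the right-hand side is monotone in N.
a, b = 0.0, 1.0
s = np.linspace(a, b, 20001)
ds = s[1] - s[0]
w = np.exp(s)  # arbitrary square-integrable test signal

def integrate(y):
    # composite trapezoidal rule on the fixed grid s
    return np.sum((y[:-1] + y[1:]) / 2) * ds

lhs = (b - a) * integrate(w**2)

def bl_lower_bound(N):
    total = 0.0
    for k in range(N + 1):
        Lk = Legendre.basis(k, domain=[a, b])(s)  # shifted Legendre polynomial
        total += (2 * k + 1) * integrate(Lk * w) ** 2
    return total

bounds = [bl_lower_bound(N) for N in range(4)]
assert all(b0 <= b1 + 1e-12 for b0, b1 in zip(bounds, bounds[1:]))  # tightens with N
assert bounds[-1] <= lhs + 1e-6                                     # still a lower bound
```

The monotone growth of `bounds` toward `lhs` mirrors the hierarchy of stability criteria obtained as N increases.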

Using Lemma 4, the following inequality can be further derived:

    \begin{equation} \begin{split} -h&\int_{t-h(t)}^{t}\dot{e}^T(s)U\dot{e}(s)ds-h\int_{t-h}^{t-h(t)}\dot{e}^T(s)U\dot{e}(s)ds\\ &\leq\xi^T(t)\{[(1-\alpha)\mathcal{F}^T_{13N}\Pi^T_{2N}\Pi^T_{1N}Y_{1N}\bar U^{-1}_{N}Y^T_{1N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{13N}\\ &\quad+\alpha\mathcal{F}^T_{14N}\Pi^T_{2N}\Pi^T_{1N}Y_{2N}\bar U^{-1}_{N}Y^T_{2N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{14N}]\\ &\quad-[(2-\alpha)\mathcal{F}^T_{13N}\Pi^T_{2N}\Pi^T_{1N}\bar U_{N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{13N}\\ &\quad+(1+\alpha)\mathcal{F}^T_{14N}\Pi^T_{2N}\Pi^T_{1N}\bar U_{N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{14N}\\ &\quad+2\mathcal{F}^T_{13N}\Pi^T_{2N}\Pi^T_{1N}(\alpha Y_{1N}+(1-\alpha)Y_{2N})\Pi_{1N}\Pi_{2N}\mathcal{F}_{14N}]\}\xi(t) \end{split} \end{equation} (33)

Lemma 6 is used to bound the double integral:

\begin{equation} \begin{split} -\int_{t-h}^{t}\int_{\theta}^{t}\dot{e}^T(s)U_1\dot{e}(s)dsd\theta &\leq -\frac{2}{h^2}\left( \int_{t-h}^{t}\int_{\theta}^{t}\dot{e}(s)dsd\theta\right)^TU_1\left( \int_{t-h}^{t}\int_{\theta}^{t}\dot{e}(s)dsd\theta\right)\\ & = \xi^T(t)(-\frac{2}{h^2}\mathcal{D}_9^TU_1\mathcal{D}_9)\xi(t) \end{split} \end{equation} (34)
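Lemma 6 is a Jensen-type bound for double integrals. A scalar numerical check (our illustration) confirms it; exchanging the order of integration reduces both double integrals to single weighted integrals:

```python
import numpy as np

# Scalar check of the Jensen-type double-integral bound:
#   int_{t-h}^{t} int_{theta}^{t} w(s)^2 ds dtheta
#     >= (2/h^2) * ( int_{t-h}^{t} int_{theta}^{t} w(s) ds dtheta )^2.
# Swapping ds and dtheta gives single integrals with weight (s - t + h).
t, h = 0.0, 1.0
s = np.linspace(t - h, t, 20001)
ds = s[1] - s[0]
w = np.cos(3 * s) + 0.5 * s        # arbitrary test signal
weight = s - (t - h)               # from exchanging the order of integration

def integrate(y):
    # composite trapezoidal rule on the fixed grid s
    return np.sum((y[:-1] + y[1:]) / 2) * ds

lhs = integrate(weight * w**2)     # double integral of w^2
inner = integrate(weight * w)      # double integral of w
assert lhs >= (2 / h**2) * inner**2 - 1e-12
```

The inequality is a weighted Cauchy-Schwarz estimate, so it holds for any choice of test signal `w`.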
    \begin{equation} \begin{split} \dot{V}_5(t, e(t)) = &2e^{\lambda t}[e^T(t)(\Lambda_1K^+-\Upsilon_1K^-)+f^T(t)(\Upsilon_1-\Lambda_1)]\dot{e}(t)\\ &+2e^{\lambda t}(1-\dot{h}(t))[e^T(t-h(t))(\Lambda_2K^+-\Upsilon_2K^-)\\ &+f^T(t-h(t))(\Upsilon_2-\Lambda_2)]\dot{e}(t-h(t))\\ &+2e^{\lambda t}[e^T(t-h)(\Lambda_3K^+-\Upsilon_3K^-)\\ &+f^T(t-h)(\Upsilon_3-\Lambda_3)]\dot{e}(t-h)\\ = &e^{\lambda t}\xi^T(t)\Phi_{5}\xi(t)\\ \end{split} \end{equation} (35)

Based on system (15), for any positive diagonal matrix \bar{H} , we have:

    \begin{equation} \begin{split} 0 = &e^{\lambda t}2[v_1e(t)+v_2\dot e(t)]^T\bar{H}\{-\dot e(t)-Ke(t)\\ &+(\tilde{D}+G^dZ^4(t)E^d)f(e(t-h(t)))\\ &+(\tilde{B}+G^bZ^3(t)E^b)f(e(t)))+N(t)\\ &+[-\tilde{K}(e(t))-Csgn(v_1e(t)+v_2\dot e(t))]\} \end{split} \end{equation} (36)

    The equation mentioned above can also be written as

    \begin{equation} \begin{split} 0 = &e^{\lambda t}2[v_1e(t)+v_2\dot e(t)]^T\bar{H}[-\dot e(t)-Ke(t)\\ &+(\tilde{D}+G^dZ^4(t)E^d)f(e(t-h(t)))\\ &+(\tilde{B}+G^bZ^3(t)E^b)f(e(t))]-2[v_1e(t)+v_2\dot e(t)]^T\tilde{H}e(t)\\ &+2[v_1e(t)+v_2\dot e(t)]^T\bar{H}[N(t)-Csgn(v_1e(t)+v_2\dot e(t))] \end{split} \end{equation} (37)

    where \tilde{H} = \bar{H}\tilde{K} .

From Lemma 5, for any scalars \varepsilon_1 > 0 , \varepsilon_2 > 0 , we have

    \begin{equation} \begin{split} 2e^{\lambda t}[v_1e(t)+&v_2\dot e(t)]^T\bar{H}(\tilde{B}+G^bZ^3(t)E^b)f(e(t))\\ \leq& e^{\lambda t} \{2[v_1e(t)+v_2\dot e(t)]^T\bar{H}\tilde{B}f(e(t))+\varepsilon_1f^T(e(t))(E^b)^TE^bf(e(t))\\ &+\frac{1}{\varepsilon_1}[v_1e(t)+v_2\dot e(t)]^T\bar{H}G^b(G^b)^T\bar{H}^T[v_1e(t)+v_2\dot e(t)]\}\\ = &e^{\lambda t}\xi^T(t)\tilde{\Phi}_{1}(1)\xi(t)\\ \end{split} \end{equation} (38)
\begin{equation} \begin{split} 2e^{\lambda t}[v_1e(t)+&v_2\dot e(t)]^T\bar{H}(\tilde{D}+G^dZ^4(t)E^d)f(e(t-h(t)))\\ \leq& e^{\lambda t}\{2[v_1e(t)+v_2\dot e(t)]^T\bar{H}\tilde{D}f(e(t-h(t)))\\ &+\varepsilon_2f^T(e(t-h(t)))(E^d)^TE^df(e(t-h(t)))\\ &+\frac{1}{\varepsilon_2}[v_1e(t)+v_2\dot e(t)]^T\bar{H}G^d(G^d)^T\bar{H}^T[v_1e(t)+v_2\dot e(t)]\}\\ = &e^{\lambda t}\xi^T(t)\tilde{\Phi}_{2}(1)\xi(t)\\ \end{split} \end{equation} (39)
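Both bounds are instances of the standard completion-of-squares estimate for norm-bounded uncertainty, 2a^Tb \leq \varepsilon^{-1} a^Ta + \varepsilon b^Tb . A quick numerical check (our illustration, with generic matrices) of that generic form:

```python
import numpy as np

# Generic numeric check of the Lemma-5-type bound: for any uncertainty Z with
# ||Z||_2 <= 1 and any eps > 0,
#   2 u^T M Z N f <= (1/eps) u^T M M^T u + eps f^T N^T N f.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))
Zr = rng.standard_normal((3, 3))
Z = Zr / max(1.0, np.linalg.norm(Zr, 2))   # normalize so ||Z||_2 <= 1
u = rng.standard_normal(3)
f = rng.standard_normal(3)

for eps in (0.1, 1.0, 10.0):
    lhs = 2 * u @ M @ Z @ N @ f
    rhs = (1 / eps) * u @ M @ M.T @ u + eps * f @ N.T @ N @ f
    assert lhs <= rhs + 1e-12              # holds for every eps > 0
```

In Eqs (38) and (39) the roles of M , N and Z are played by \bar{H}G^{b(d)} , E^{b(d)} and Z^{3(4)}(t) respectively.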

From Assumption 1 and Eq (25),

    \begin{equation} \begin{split} 2e^{\lambda t}[v_1e(t)&+v_2\dot e(t)]^T\bar{H}[N(t)-Csgn(v_1e(t)+v_2\dot e(t))]\\ &\leq 2e^{\lambda t}\sum\limits_{p = 1}^{n}|v_1e_p(t)+v_2\dot e_p(t)|\bar{H}_p[|N_p(t)|-c_p]\\ &\leq 2e^{\lambda t}\sum\limits_{p = 1}^{n}|v_1e_p(t)+v_2\dot e_p(t)|\bar{H}_p\left[ 2\sum\limits_{q = 1}^{n}(b^*_{pq}+d^*_{pq})m_q-c_p\right] \\ & = 0 \end{split} \end{equation} (40)

    On the other hand, from Assumption 2, we can get

    \begin{equation} (f_p(e_p(t))-\sigma^+_pe_p(t))(f_p(e_p(t))-\sigma^-_pe_p(t))\leq 0, p = 1, 2, ..., n \end{equation} (41)

    which is equivalent to

\begin{equation} \tilde{\varphi}(t) = \mathcal{D}^T_6 \begin{bmatrix} -2\sigma^+_p\sigma^-_p\varrho_p\varrho_p^T&(\sigma^+_p+\sigma^-_p)\varrho_p\varrho_p^T\\(\sigma^+_p+\sigma^-_p)\varrho_p\varrho_p^T&-2\varrho_p\varrho_p^T \end{bmatrix} \mathcal{D}_6\geq 0, \quad p = 1, 2, ..., n \end{equation} (42)

where \varrho_p = col\{0, ..., 0, 1_p, 0, ..., 0\} . Thus, we obtain

    \begin{equation} \begin{split} e^{\lambda t}\sum\limits_{p = 1}^{n}\mathcal{L}_{1p}\tilde{\varphi}(t) = e^{\lambda t}\xi^T(t)\mathcal{D}^T_6(S_1\mathcal{L}_1S_2^T+S_2\mathcal{L}_1S^T_1)\mathcal{D}_6\xi(t)\geq 0 \end{split} \end{equation} (43)

Similarly, the following inequalities can be obtained:

    \begin{equation} \begin{split} &e^{\lambda t}\xi^T(t)\mathcal{D}^T_7(S_1\mathcal{L}_2S_2^T+S_2\mathcal{L}_2S^T_1)\mathcal{D}_7\xi(t)\geq 0\\ &e^{\lambda t}\xi^T(t)\mathcal{D}^T_8(S_1\mathcal{L}_3S_2^T+S_2\mathcal{L}_3S^T_1)\mathcal{D}_8\xi(t)\geq 0 \end{split} \end{equation} (44)

From Eq (21), it is obvious that

    \begin{equation} \begin{split} \rho_p(t) = h(t)\beta_p(t), \ \varpi_p(t) = (h-h(t))\iota_p(t), \; p = 0, ..., N-1 \end{split} \end{equation} (45)

Then, for any matrix \Gamma_N of appropriate dimension,

    \begin{equation} \begin{split} 2e^{\lambda t}\xi^T(t)\Gamma_N^T\mathcal{F}_{15N}\xi(t) = 0\\ \end{split} \end{equation} (46)

    Combined with the above analysis, the following conclusions can be drawn

    \begin{equation} \begin{split} \dot{V}(t)\leq e^{\lambda t}\xi^T(t)\Xi_N(h(t), \dot{h}(t))\xi(t) \end{split} \end{equation} (47)

    where

\begin{equation} \begin{split} \Xi_N(h(t), \dot{h}(t)) = &(1-\alpha)\mathcal{F}^T_{13N}\Pi^T_{2N}\Pi^T_{1N}Y_{1N}\bar U^{-1}_{N}Y^T_{1N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{13N}\\ &+\tilde{\Xi}_N(h(t), \dot{h}(t))+\alpha\mathcal{F}^T_{14N}\Pi^T_{2N}\Pi^T_{1N}Y_{2N}\bar U^{-1}_{N}Y^T_{2N}\Pi_{1N}\Pi_{2N}\mathcal{F}_{14N}\\ &+\frac{1}{\varepsilon_1}[v_1e(t)+v_2\dot e(t)]^T\bar{H}G^b(G^b)^T\bar{H}^T[v_1e(t)+v_2\dot e(t)]\\ &+\frac{1}{\varepsilon_2}[v_1e(t)+v_2\dot e(t)]^T\bar{H}G^d(G^d)^T\bar{H}^T[v_1e(t)+v_2\dot e(t)]\\ \notag \end{split} \end{equation}

Since \Xi_N(h(t), \dot{h}(t)) is affine in both h(t) and \dot{h}(t) , \Xi_N(h(t), \dot{h}(t)) < 0 holds for all (h(t), \dot{h}(t))\in[0, h]\times[\mu_1, \mu_2] if and only if it holds at the four vertices (0, \mu_1), (0, \mu_2), (h, \mu_1), (h, \mu_2) , that is,

    \begin{equation} \begin{split} \Xi_N(\tau, \mu)|_{\tau = 0, h, \mu = \mu_1, \mu_2} < 0 \end{split} \end{equation} (48)
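This vertex argument can be sanity-checked numerically; the sketch below (with small symmetric coefficient matrices of our own choosing, not the paper's) verifies that negative definiteness at the four vertices propagates to a grid over the whole box:

```python
import numpy as np

# Numeric sanity check of the vertex argument: a symmetric matrix function
# affine in (tau, mu) is negative definite on the whole box
# [0, h] x [mu1, mu2] once it is negative definite at the four vertices,
# because the maximum eigenvalue is convex along affine slices.
A0 = -5.0 * np.eye(3)                       # constant part, chosen stable
A1 = np.array([[1.0, 0.3, 0.0],
               [0.3, -0.5, 0.2],
               [0.0, 0.2, 0.8]])            # coefficient of tau = h(t)
A2 = np.array([[0.4, -0.2, 0.1],
               [-0.2, 0.6, 0.0],
               [0.1, 0.0, -0.3]])           # coefficient of mu = hdot(t)

def Xi(tau, mu):
    return A0 + tau * A1 + mu * A2

h, mu1, mu2 = 1.0, -0.25, 0.25
vertices_ok = all(np.linalg.eigvalsh(Xi(tau, mu)).max() < 0
                  for tau in (0.0, h) for mu in (mu1, mu2))
grid_ok = all(np.linalg.eigvalsh(Xi(tau, mu)).max() < 0
              for tau in np.linspace(0.0, h, 11)
              for mu in np.linspace(mu1, mu2, 11))
assert vertices_ok and grid_ok
```

This is exactly why checking Eq (48) at the four vertices suffices for the whole delay/delay-rate box.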

According to Lemma 2, Eqs (22) and (23) guarantee \Xi_N(h(t), \dot{h}(t)) < 0 , so we know from Eq (48) that

    \begin{equation} \begin{split} \dot{V}(t, e(t)) < 0 \end{split} \end{equation} (49)

    In addition, from Eq (27), we have

\begin{equation} \begin{split} V(0)\leq\Lambda_N\left(\underset{-h\leq\theta\leq 0}{sup}\{\|e(\theta)\|, \|\dot{e}(\theta)\|\}\right)^2 = \Lambda_N\|\phi\|^2 \end{split} \end{equation} (50)

    where

    \begin{equation*} \begin{split} \Lambda_N = &(3+2h^2)\lambda_{max}(P)+\frac{h^3}{2}e^{\lambda h}\lambda_{max}(U)\\ &+he^{\lambda h}[2\lambda_{max}(Q_1)+\lambda_{max}(\hat{K}Q_1\hat{K})]+he^{\lambda h}[2\lambda_{max}(Q_2)+\lambda_{max}(\hat{K}Q_2\hat{K})]\\ &+he^{\lambda h}\lambda_{max}(Q_{3N})(2+\frac{6}{\lambda}+Nh^2)+he^{\lambda h}\lambda_{max}(Q_{4N})(2+\frac{6}{\lambda}+Nh^2)\\ &+he^{\lambda h}\lambda_{max}(Q_{5N})(1+\frac{4}{\lambda}+2Nh^2)+\frac{h^3}{2}e^{\lambda h}\lambda_{max}(U_1)+2\lambda_{max}[(\Lambda_1+\Upsilon_1)K^+-K^-]\\ &+2\lambda_{max}[(\Lambda_2+\Upsilon_2)K^+-K^-]+2\lambda_{max}[(\Lambda_3+\Upsilon_3)K^+-K^-]\\ \hat{K} = &diag\{max\{|\sigma^+_1|, |\sigma^-_1|\}, max\{|\sigma^+_2|, |\sigma^-_2|\}, ..., max\{|\sigma^+_n|, |\sigma^-_n|\}\} \end{split} \end{equation*}

By Eqs (27), (49) and (50), we obtain

    \begin{equation} \begin{split} \Lambda_N\|\phi\|^2\geq V(0) \geq V(t) \geq e^{\lambda t}\lambda_{min}(P)\|e(t)\|^2 \end{split} \end{equation} (51)

    Therefore

\begin{equation*} \begin{split} \|e(t)\|\leq \sqrt{\frac{\Lambda_N}{\lambda_{min}(P)}}\|\phi\|e^{-\frac{\lambda t}{2}} \end{split} \end{equation*}

According to Definition 2, systems (9) and (10) are globally exponentially synchronized under control law (12). This completes the proof of Theorem 1.

To verify the theorem, two numerical examples are given in the next section.

Example 1. Consider the MRNN with the following parameters: k_1 = k_2 = 1 , l_1 = l_2 = 0 , l_i(x_i(t)) = tanh(x_i(t)), i = 1, 2 , where

\begin{equation*} \begin{split} b_{11}(x_1(t)) = \begin{cases} 1.7, |x_1(t)|\leq2.5\\ 2.3, |x_1(t)| > 2.5 \end{cases}, \ b_{12}(x_1(t)) = \begin{cases} -2, |x_1(t)|\leq2.5\\ -1.9, |x_1(t)| > 2.5 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} b_{21}(x_2(t)) = \begin{cases} 0.4, |x_2(t)|\leq2.5\\ 0.6, |x_2(t)| > 2.5 \end{cases}, \ b_{22}(x_2(t)) = \begin{cases} 1.6, |x_2(t)|\leq2.5\\ 2, |x_2(t)| > 2.5 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} d_{11}(x_1(t)) = \begin{cases} -0.5, |x_1(t)|\leq2.5\\ -1.5, |x_1(t)| > 2.5 \end{cases}, \ d_{12}(x_1(t)) = \begin{cases} 0.1, |x_1(t)|\leq2.5\\ 0.2, |x_1(t)| > 2.5 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} d_{21}(x_2(t)) = \begin{cases} 0.1, |x_2(t)|\leq2.5\\ 0.2, |x_2(t)| > 2.5 \end{cases}, \ d_{22}(x_2(t)) = \begin{cases} -0.5, |x_2(t)|\leq2.5\\ -1.5, |x_2(t)| > 2.5 \end{cases} \end{split} \end{equation*}

The activation functions l_i(x_i(t)) = tanh(x_i(t)) satisfy Assumptions 1 and 2 with \sigma^-_1 = \sigma^-_2 = 0 , \sigma^+_1 = \sigma^+_2 = 1 , m_1 = m_2 = 1 . And, we have

    \begin{equation*} \begin{split} \tilde{B} = &\begin{bmatrix} 2.00&-1.95\\0.50&1.80 \end{bmatrix}, \qquad\tilde{D} = \begin{bmatrix} -1.00&0.15\\0.15&-1.00 \end{bmatrix}\\ B^* = &\begin{bmatrix} -0.30&-0.05\\-0.10&-0.20 \end{bmatrix}, \quad D^* = \begin{bmatrix} 0.50&-0.05\\-0.10&0.50 \end{bmatrix} \end{split} \end{equation*}

As can be seen from Tables 1 and 2, when N = 1 , h = 1 , \mu = 0.25 , \lambda = 0.5 , v_2 = 1 and v_1 increases, the upper limit of the control gain decreases, indicating that controller (12) is more flexible. When N = 1 , h = 1 , \mu = 0.25 , \lambda = 0.5 , v_1 = 1 , the upper limit of the control gain decreases when v_2 decreases. Different controller gains \tilde{K} can be obtained by adjusting v_1, v_2 , and an appropriate controller gain can be selected according to the control requirements. Table 3 shows some comparisons of control gains for different h values. It is worth noting that [4,37] require \mu < 1 . Therefore, compared with the existing synchronization criteria in [4,37], our results are less conservative.

Table 1.  Control gains \tilde{K} for N = 1 , h = 1 , \mu = 0.25 , \lambda = 0.5 , v_2 = 1 , and various v_1 in Example 1.
    v_1 1 5 10 20
    \tilde{K} \begin{bmatrix}19.0189&-2.0389\\-1.9456&14.6107\end{bmatrix} \begin{bmatrix}7.6689&-1.0340\\-1.0044&6.3736\end{bmatrix} \begin{bmatrix}6.9658&-0.9291\\ -0.9073&5.8214\end{bmatrix} \begin{bmatrix}6.3189&-0.8693\\-0.8604&5.3212\end{bmatrix}

Table 2.  Control gains \tilde{K} for N = 1 , h = 1 , \mu = 0.25 , \lambda = 0.5 , v_1 = 1 , and various v_2 in Example 1.
    v_2 0.1 0.5 0.8
    \tilde{K} \begin{bmatrix}6.7472&-0.9204\\-0.9035&5.6735\end{bmatrix} \begin{bmatrix}11.5958&-1.3825\\-1.3211&9.1989\end{bmatrix} \begin{bmatrix}15.7968&-1.7601\\-1.6789&12.2514\end{bmatrix}

Table 3.  Control gains \tilde{K} for N = 1 , \mu = 0.25 , \lambda = 0.5 , v_1 = 1 , v_2 = 1 , and various h in Example 1.
    h 0.1 0.5 0.8
    \tilde{K} \begin{bmatrix}16.1541&-1.6865\\-1.6218&11.9963\end{bmatrix} \begin{bmatrix}17.3840&-1.9725\\-1.8904&13.0817\end{bmatrix} \begin{bmatrix}17.9333&-2.0398\\-1.9414&13.7376\end{bmatrix}


When the initial values of systems (9) and (10) are set as x(t) = [2\quad-1.2]^T , y(t) = [-0.5\quad 0.5]^T , v_1 = 1 , v_2 = 1 , \omega = 0 , N = 1 , the synchronization trajectories of the master and slave systems without the controller are shown in Figure 1, the synchronization error is shown in Figure 2, and the state response of the master system is shown in Figure 3. From these figures, we can see that the drive system and the response system are not synchronized.

    Figure 1.  When N = 1 , the state synchronization trajectories of x(t) and y(t) without controller in Example 1.
    Figure 2.  Synchronization error e_{1}(t) , e_{2}(t) , of error system without controller in Example 1.
    Figure 3.  State response trajectory of the drive system in Example 1.

When the initial values of systems (9) and (10) are set to x(t) = [2\quad -1.2]^T , y(t) = [-0.5\quad 0.5]^T , v_1 = 1 , v_2 = 1 , \omega = 0 , N = 1 , the control law (12) yields the controller gain shown below:

    \begin{equation*} \tilde{K} = \begin{bmatrix}19.0189&-2.0389\\-1.9456&14.6107\end{bmatrix} \end{equation*}

The synchronous trajectories of the drive system and the response system are shown in Figure 4, and their synchronization error is shown in Figure 5.

    Figure 4.  When N = 1 , the synchronization trajectories of x(t) and y(t) under control law (12) in Example 1.
    Figure 5.  When N = 1, synchronization error e_{1}(t) , e_{2}(t) of error system with control law (12) in Example 1.

Comparing the synchronization process for N = 1 in Figure 5 with that for N = 2 in Figure 6, the synchronization time is reduced. Combining the values given in Figures 5 and 6 and Table 4, it can be seen that the conservatism of the results decreases as the order of the Bessel-Legendre inequality increases. Moreover, the synchronization time of the error system is reduced and the controller performance is better.

    Figure 6.  When N = 2, synchronization error e_{1}(t) , e_{2}(t) of error system with control law (12) in Example 1.
Table 4.  Control gains \tilde{K} for h = 1 , \mu = 0.25 , \lambda = 0.5 , v_1 = 1 , v_2 = 1 , and various N in Example 1.
    N 1 2 3 \cdots
    \tilde{K} \begin{bmatrix}19.0189&-2.0389\\-1.9456&14.6107\end{bmatrix} \begin{bmatrix}21.2073&-2.5073\\-2.4192&17.9747\end{bmatrix} \begin{bmatrix}22.8669&-2.4866\\-2.3966&17.6995\end{bmatrix} \cdots


Example 2. Consider the MRNN with the following parameters: k_1 = k_2 = k_3 = 1 , l_1 = l_2 = l_3 = 0 , l_i(x_i(t)) = tanh(x_i(t)), i = 1, 2, 3 , where

\begin{equation*} \begin{split} b_{11}(x_1(t)) = \begin{cases} 0.2, |x_1(t)|\leq1\\ 0.4, |x_1(t)| > 1 \end{cases}, b_{12}(x_1(t)) = \begin{cases} -0.1, |x_1(t)|\leq1\\ 0.2, |x_1(t)| > 1 \end{cases}, b_{13}(x_1(t)) = \begin{cases} 0.4, |x_1(t)|\leq1\\ 0.3, |x_1(t)| > 1 \end{cases}\\ \end{split} \end{equation*}
\begin{equation*} \begin{split} b_{21}(x_2(t)) = \begin{cases} 0.12, |x_2(t)|\leq1\\ 0.1, |x_2(t)| > 1 \end{cases}, b_{22}(x_2(t)) = \begin{cases} 0.1, |x_2(t)|\leq1\\ 0.3, |x_2(t)| > 1 \end{cases}, b_{23}(x_2(t)) = \begin{cases} 0.2, |x_2(t)|\leq1\\ -0.4, |x_2(t)| > 1 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} b_{31}(x_3(t)) = \begin{cases} 0.2, |x_3(t)|\leq1\\ 0.1, |x_3(t)| > 1 \end{cases}, b_{32}(x_3(t)) = \begin{cases} 0.3, |x_3(t)|\leq1\\ -0.2, |x_3(t)| > 1 \end{cases}, b_{33}(x_3(t)) = \begin{cases} 0.1, |x_3(t)|\leq1\\ 0.3, |x_3(t)| > 1 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} d_{11}(x_1(t)) = \begin{cases} -0.2, |x_1(t)|\leq1\\ -0.7, |x_1(t)| > 1 \end{cases}, d_{12}(x_1(t)) = \begin{cases} 0.1, |x_1(t)|\leq1\\ -0.09, |x_1(t)| > 1 \end{cases}, d_{13}(x_1(t)) = \begin{cases} -0.1, |x_1(t)|\leq1\\ 0.1, |x_1(t)| > 1 \end{cases}\\ \end{split} \end{equation*}
\begin{equation*} \begin{split} d_{21}(x_2(t)) = \begin{cases} -0.1, |x_2(t)|\leq1\\ -0.19, |x_2(t)| > 1 \end{cases}, d_{22}(x_2(t)) = \begin{cases} -0.5, |x_2(t)|\leq1\\ -1.1, |x_2(t)| > 1 \end{cases}, d_{23}(x_2(t)) = \begin{cases} 0.2, |x_2(t)|\leq1\\ 0.3, |x_2(t)| > 1 \end{cases} \end{split} \end{equation*}
\begin{equation*} \begin{split} d_{31}(x_3(t)) = \begin{cases} 0.2, |x_3(t)|\leq1\\ -0.1, |x_3(t)| > 1 \end{cases}, d_{32}(x_3(t)) = \begin{cases} -0.2, |x_3(t)|\leq1\\ -0.4, |x_3(t)| > 1 \end{cases}, d_{33}(x_3(t)) = \begin{cases} -0.3, |x_3(t)|\leq1\\ 0.1, |x_3(t)| > 1 \end{cases} \end{split} \end{equation*}

The activation functions l_i(x_i(t)) = tanh(x_i(t)) satisfy Assumptions 1 and 2 with \sigma^-_1 = \sigma^-_2 = \sigma^-_3 = 0 , \sigma^+_1 = \sigma^+_2 = \sigma^+_3 = 1 , m_1 = m_2 = m_3 = 1 . And, we have

    \begin{equation*} \begin{split} \tilde{B} = &\begin{bmatrix} 0.30&0.05&0.35\\0.11&0.2&-0.1\\0.15&0.05&0.2 \end{bmatrix}, \qquad \tilde{D} = \begin{bmatrix} -0.45&0.005&0\\-0.145&-0.8&0.25\\0.05&-0.3&-0.1 \end{bmatrix}\\ B^* = &\begin{bmatrix} -0.10&-0.15&0.05\\0.01&-0.1&0.3\\0.05&0.25&-0.1 \end{bmatrix}, \ D^* = \begin{bmatrix} 0.25&0.095&-0.10\\0.045&0.30&-0.05\\0.15&0.1&-0.2 \end{bmatrix} \end{split} \end{equation*}

Select the initial values x(t) = [0.7\quad-1.2\quad0.3]^T , y(t) = [-0.5 \quad0.5\quad0.7]^T , v_1 = 1 , v_2 = 1 , \omega = 0 , N = 1 . The control gain is obtained as follows:

    \begin{equation*} \tilde{K} = \begin{bmatrix}3.6623&0.0597&0.1680\\0.0602&3.7304&0.0209\\0.1670&0.0214&3.5752\end{bmatrix} \end{equation*}

When N = 1 , Figure 7 shows the synchronization trajectories of x(t) and y(t) in Example 2, and Figure 8 shows the state of the error signal e(t) . It is obvious that the error state converges exponentially to zero, which conforms to the conclusion of Theorem 1.
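The convergence seen in Figure 8 can be roughly reproduced with a simplified Euler simulation of the error dynamics. This sketch is our simplification, not the paper's full model: it keeps only the linear feedback -(K+\tilde{K})e(t) and the nominal nonlinear terms, freezes the delay at h = 1 , and omits the discontinuous compensation term -Csgn(\cdot) that handles the parameter mismatch.

```python
import numpy as np

# Simplified Euler sketch of the Example 2 error dynamics:
#   e'(t) = -(K + K_tilde) e(t) + B_tilde tanh(e(t)) + D_tilde tanh(e(t - h)).
K = np.eye(3)                                # k_1 = k_2 = k_3 = 1
K_tilde = np.array([[3.6623, 0.0597, 0.1680],
                    [0.0602, 3.7304, 0.0209],
                    [0.1670, 0.0214, 3.5752]])
B_tilde = np.array([[0.30, 0.05, 0.35],
                    [0.11, 0.20, -0.10],
                    [0.15, 0.05, 0.20]])
D_tilde = np.array([[-0.45, 0.005, 0.0],
                    [-0.145, -0.80, 0.25],
                    [0.05, -0.30, -0.10]])

dt, h, T = 0.01, 1.0, 10.0
lag = int(h / dt)
e = np.array([0.7, -1.2, 0.3]) - np.array([-0.5, 0.5, 0.7])  # x(0) - y(0)
e0_norm = np.linalg.norm(e)
hist = [e.copy() for _ in range(lag + 1)]    # constant initial history

for _ in range(int(T / dt)):
    e_delay = hist[0]                        # value at t - h
    de = -(K + K_tilde) @ e + B_tilde @ np.tanh(e) + D_tilde @ np.tanh(e_delay)
    e = e + dt * de
    hist.pop(0)
    hist.append(e.copy())

assert np.linalg.norm(e) < 1e-3 * e0_norm    # error has decayed by orders
```

Even under these simplifications, the strong diagonal gain \tilde{K} dominates the bounded nonlinear and delayed terms, so the error norm collapses toward zero, consistent with Figure 8.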

    Figure 7.  When N = 1, the synchronization trajectories of x(t) and y(t) in Example 2.
    Figure 8.  When N = 1, synchronization error e_{1}(t) , e_{2}(t) , e_{3}(t) of error system with control in Example 2.

The exponential synchronization problem of a class of delayed memristive neural networks has been studied. By using the N-order Bessel-Legendre inequality, an exponential synchronization criterion for the delayed MNNs is given, and a more flexible intermittent feedback controller is constructed by introducing two adjustable scalars. Finally, it is shown that the conservatism of the criterion decreases as the order of the Bessel-Legendre inequality increases. The validity of the main results is verified by two simulation examples.

This work was supported by the National Natural Science Foundation of China under Grants 51807198, 41906154 and 52101358.

    The authors declare that they have no conflicts of interest.



    [1] S. Chen, J. Feng, J. Wang, Y. Zhao, Almost sure exponential synchronization of drive-response stochastic memristive neural networks, Appl. Math. Comput., 383 (2020), 125360. http://dx.doi.org/10.1016/j.amc.2020.125360 doi: 10.1016/j.amc.2020.125360
    [2] X.-S. Yang, D. W. C. Ho, Synchronization of delayed memristive neural networks: robust analysis approach, IEEE Trans. Cybern., 46 (2016), 3377–3387. http://dx.doi.org/10.1109/tcyb.2015.2505903 doi: 10.1109/tcyb.2015.2505903
[3] H. Liu, Z. Wang, B. Shen, H. Dong, Delay-distribution-dependent H_\infty state estimation for discrete-time memristive neural networks with mixed time-delays and fading measurements, IEEE Trans. Cybern., 50 (2020), 440–451. http://dx.doi.org/10.1109/TCYB.2018.2862914 doi: 10.1109/TCYB.2018.2862914
    [4] L. Wang, Y. Shen, Q. Yin, G. Zhang, Adaptive synchronization of memristor-based neural networks with time-varying delays, IEEE Trans. Neural Netw. Learn. Syst., 26 (2015), 2033–2042. http://dx.doi.org/10.1109/tnnls.2014.2361776 doi: 10.1109/tnnls.2014.2361776
    [5] H. Liu, L. Ma, Z. Wang, Y. Liu, F. Alsaadi, An overview of stability analysis and state estimation for memristive neural networks, Neurocomputing, 391 (2020), 1–12. http://dx.doi.org/10.1016/j.neucom.2020.01.066 doi: 10.1016/j.neucom.2020.01.066
    [6] Z. Fei, C. Guan, H. Gao, Exponential synchronization of networked chaotic delayed neural network by a hybrid event trigger scheme, IEEE Trans. Neural Netw. Learn. Syst., 29 (2018), 2558–2567. http://dx.doi.org/10.1109/tnnls.2017.2700321 doi: 10.1109/tnnls.2017.2700321
[7] S. Jo, T. Chang, I. Ebong, B. Bhadviya, P. Mazumder, W. Lu, Nanoscale memristor device as synapse in neuromorphic systems, Nano Lett., 10 (2010), 1297–1301. http://dx.doi.org/10.1021/nl904092h doi: 10.1021/nl904092h
    [8] A. Wu, S. Wen, Z. Zeng, Synchronization control of a class of memristor-based recurrent neural networks, Inform. Sciences, 183 (2012), 106–116. http://dx.doi.org/10.1016/j.ins.2011.07.044 doi: 10.1016/j.ins.2011.07.044
    [9] Y. Wang, Y. Cao, Z. Guo, T. Huang, S. Wen, Event-based sliding-mode synchronization of delayed memristive neural networks via continuous/periodic sampling algorithm, Appl. Math. Comput., 383 (2020), 379–383. http://dx.doi.org/10.1016/j.amc.2020.125379 doi: 10.1016/j.amc.2020.125379
    [10] Y. Bao, Y. Zhang, Fixed-time dual-channel event-triggered secure quasi-synchronization of coupled memristive neural networks, J. Franklin Inst., 358 (2021), 10052–10078. http://dx.doi.org/10.1016/j.jfranklin.2021.10.023 doi: 10.1016/j.jfranklin.2021.10.023
    [11] L. Ma, Z. Wang, Y. Liu, F. E. Alsaadi, Exponential stabilization of nonlinear switched systems with distributed time-delay: An average dwell time approach, Eur. J. Control, 37 (2017), 34–42. http://dx.doi.org/10.1016/j.ejcon.2017.05.003 doi: 10.1016/j.ejcon.2017.05.003
[12] L. Sun, Y. Tang, W. Wang, S. Shen, Stability analysis of time-varying delay neural networks based on new integral inequalities, J. Franklin Inst., 357 (2020), 10828–10843. http://dx.doi.org/10.1016/j.jfranklin.2020.08.017 doi: 10.1016/j.jfranklin.2020.08.017
    [13] D. Liu, D. Ye, Exponential synchronization of memristive delayed neural networks via event-based impulsive control method, J. Franklin Inst., 357 (2020), 4437–4457. http://dx.doi.org/10.1016/j.jfranklin.2020.03.011 doi: 10.1016/j.jfranklin.2020.03.011
    [14] H. W. Ren, Z. P. Peng, Y. Gu, Fixed-time synchronization of stochastic memristor-based neural networks with adaptive control, Neural Networks, 130 (2020), 165–175. http://dx.doi.org/10.1016/j.neunet.2020.07.002 doi: 10.1016/j.neunet.2020.07.002
    [15] X. Wang, X. Liu, K. She, S. Zhong, Pinning impulsive synchronization of complex dynamical networks with various time-varying delay sizes, Nonlinear Anal. Hybrid Syst., 26 (2017), 307–318. http://dx.doi.org/10.1016/j.nahs.2017.06.005 doi: 10.1016/j.nahs.2017.06.005
    [16] Y. Bao, Y. Zhang, B. Zhang, Fixed-time synchronization of coupled memristive neural networks via event-triggered control, Appl. Math. Comput., 411 (2021), 126542. http://dx.doi.org/10.1016/j.amc.2021.126542 doi: 10.1016/j.amc.2021.126542
    [17] Y. Zhang, Y. Bao, Event-triggered hybrid impulsive control for synchronization of memristive neural networks, Sci. China Inf. Sci., 63 (2020), 150206. http://dx.doi.org/10.1007/s11432-019-2694-y doi: 10.1007/s11432-019-2694-y
    [18] Y. Fan, X. Huang, H. Shen, J. Cao, Switching event-triggered control for global stabilization of delayed memristive neural networks: An exponential attenuation scheme, Neural Networks, 117 (2019), 216–224. http://dx.doi.org/10.1016/j.neunet.2019.05.014 doi: 10.1016/j.neunet.2019.05.014
    [19] C. Zhang, F. Xie, Synchronization of delayed memristive neural networks by establishing novel Lyapunov functional, Neurocomputing, 369 (2019), 80–91. http://dx.doi.org/10.1016/j.neucom.2019.08.060 doi: 10.1016/j.neucom.2019.08.060
    [20] J. H. Kim, Further improvement of Jensen inequality and application to stability of time-delayed systems, Automatica, 64 (2016), 121–125. http://dx.doi.org/10.1016/j.automatica.2015.08.025 doi: 10.1016/j.automatica.2015.08.025
    [21] A. Seuret, F. Gouaisbaut, Wirtinger-based integral inequality: application to time-delay systems, Automatica, 49 (2013), 2860–2866. http://dx.doi.org/10.1016/j.automatica.2013.05.030 doi: 10.1016/j.automatica.2013.05.030
    [22] M. J. Park, O. M. Kwon, J. H. Park, S. M. Lee, E. J. Cha, Stability of time-delay systems via Wirtinger-based double integral inequality, Automatica, 55 (2015), 204–208. http://dx.doi.org/10.1016/j.automatica.2015.03.010 doi: 10.1016/j.automatica.2015.03.010
    [23] H. J. Yu, Y. He, M. Wu, Improved generalized H_2 filtering for static neural networks with time-varying delay via free-matrix-based integral inequality, Math. Probl. Eng., 2018 (2018), 5147565. http://dx.doi.org/10.1155/2018/5147565 doi: 10.1155/2018/5147565
    [24] P. Park, W. I. Lee, S. Y. Lee, Auxiliary function-based integral/summation inequalities: application to continuous/discrete time-delay systems, Int. J. Control Autom. Syst., 14 (2016), 3–11. http://dx.doi.org/10.1007/s12555-015-2002-y doi: 10.1007/s12555-015-2002-y
    [25] A. Seuret, F. Gouaisbaut, Stability of linear systems with time-varying delays using Bessel-Legendre inequalities, IEEE Trans. Autom. Control, 63 (2018), 225–232. http://dx.doi.org/10.1109/TAC.2017.2730485 doi: 10.1109/TAC.2017.2730485
[26] X.-M. Zhang, Q.-L. Han, Z. Zeng, Hierarchical type stability criteria for delayed neural networks via canonical Bessel-Legendre inequalities, IEEE Trans. Cybern., 48 (2018), 1660–1671. http://dx.doi.org/10.1109/TCYB.2017.2776283 doi: 10.1109/TCYB.2017.2776283
    [27] H. Ren, J. Xiong, R. Lu, Y. Wu, Synchronization analysis of network systems applying sampled-data controller with time-delay via the Bessel-Legendre inequality, Neurocomputing, 331 (2019), 346–355. http://dx.doi.org/10.1016/j.neucom.2018.11.061 doi: 10.1016/j.neucom.2018.11.061
    [28] C.-K. Zhang, Y. He, L. Jiang, M. Wu, Notes on stability of time-delay systems: bounding inequalities and augmented Lyapunov-Krasovskii functionals, IEEE Trans. Autom. Control, 62 (2017), 5331–5336. http://dx.doi.org/10.1109/TAC.2016.2635381 doi: 10.1109/TAC.2016.2635381
    [29] L. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory, 18 (1971), 507–519. http://dx.doi.org/10.1109/TCT.1971.1083337 doi: 10.1109/TCT.1971.1083337
    [30] A. F. Filippov, Differential equations with discontinuous righthand side, Dordrecht: Springer, 1988. http://dx.doi.org/10.1007/978-94-015-7793-9
    [31] Z. Wu, J. H. Park, H. Su, B. Song, J. Chu, Exponential synchronization for complex dynamical networks with sampled-data, J. Franklin Inst., 349 (2012), 2735–2749. http://dx.doi.org/10.1016/j.jfranklin.2012.09.002 doi: 10.1016/j.jfranklin.2012.09.002
    [32] R. Zhang, D. Zeng, J. H. Park, S. Zhong, Y. Yu, Novel discontinuous control for exponential synchronization of memristive recurrent neural networks with heterogeneous time-varying delays, J. Franklin Inst., 355 (2018), 2826–2848. http://dx.doi.org/10.1016/j.jfranklin.2018.01.018 doi: 10.1016/j.jfranklin.2018.01.018
    [33] M. Syed Ali, M. Marudai, Stochastic stability of discrete-time uncertain recurrent neural networks with Markovian jumping and time-varying delays, Math. Comput. Model., 54 (2011), 1979–1988. http://dx.doi.org/10.1016/j.mcm.2011.05.004 doi: 10.1016/j.mcm.2011.05.004
    [34] X.-M. Zhang, Q.-L. Han, A. Seuret, F. Gouaisbaut, An improved reciprocally convex inequality and an augmented Lyapunov-Krasovskii functional for stability of linear systems with time-varying delay, Automatica, 84 (2017), 221–226. http://dx.doi.org/10.1016/j.automatica.2017.04.048 doi: 10.1016/j.automatica.2017.04.048
    [35] Y. Wang, L. Xie, C. E. De Souza, Robust control of a class of uncertain nonlinear system, Syst. Control Lett., 19 (1992), 139–149. http://dx.doi.org/10.1016/0167-6911(92)90097-C doi: 10.1016/0167-6911(92)90097-C
    [36] J. Sun, G. P. Liu, J. Chen, D. Rees, Improved delay-range-dependent stability criteria for linear systems with time-varying delays, Automatica, 46 (2010), 466–470. http://dx.doi.org/10.1016/j.automatica.2009.11.002 doi: 10.1016/j.automatica.2009.11.002
    [37] X. Yang, J. Cao, J. Liang, Exponential synchronization of memristive neural networks with delays: interval matrix method, IEEE Trans. Neural Netw. Learn. Syst., 28 (2017), 1878–1888. http://dx.doi.org/10.1109/tnnls.2016.2561298 doi: 10.1109/tnnls.2016.2561298
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
