1. Introduction
Problems of artificial intelligence (AI) can involve complex data or tasks; consequently, neural networks (NNs), as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], can be beneficial in overcoming the need to design AI functions manually. Knowledge of NNs has been applied in various fields, including biology, artificial intelligence, static image processing, associative memory, electrical engineering and signal processing. The connections between neurons carry weights, as in biological systems: a positive weight reflects an excitatory connection, while a negative weight reflects an inhibitory one.
Activation functions determine the outcome of a learning model and the accuracy of the training computation, which can make or break a large NN. Activation functions are also important in determining the convergence speed of NNs, or, in some cases, may prevent convergence in the first place, as reported in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. NNs are built from processing units and learning algorithms. Time-delay is one of the common distinctive features in the operation of neurons; it plays an important role in reducing efficiency and stability, and may lead to dynamic behavior involving chaos, uncertainty and divergence, as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Therefore, NNs with time delay have received considerable attention in many fields, as in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25].
It is well known that many real processes depend on delays, whereby the current state depends on previous states. Delays occur in many control systems, for example, aircraft control systems, biological models, chemical processes and electrical networks. Time-delay is often the main source of instability and poor performance of a system.
There are two kinds of stability conditions for time-delay systems: delay-dependent and delay-independent. Delay-dependent conditions are often less conservative than delay-independent ones, especially when the delays are relatively small. Delay-dependent stability conditions rely mainly on an upper estimate of the allowable delay. Delay-dependent stability for interval time-varying delay has been broadly studied and adapted in various research fields in [3,13,14,15,16,19,22,23,24,28]. A time-varying delay whose range is confined to an interval is called an interval time-varying delay. Some researchers have reported on NN problems with interval time-varying delay, as in [1,2,3,4,5,7,11,12,13,14,15,21,25], while [16] reported on NN stability with additive time-varying delay.
There are two types of stability over a finite time interval, namely finite-time stability and fixed-time stability. With finite-time stability, the system converges within a certain period for any initial condition, while with fixed-time stability the convergence time is the same for all initial conditions within the domain. Both finite-time stability and fixed-time stability have been extensively adapted in many fields, such as [26,29,30,31,32,33,34,35,37,38]. In [34], Puangmalai et al. investigated finite-time stability criteria of linear systems with non-differentiable time-varying delay via a new integral inequality based on a free matrix for bounding the integral $ \int_{a}^{b}\dot{z}^{T}(s)M\dot{z}(s)ds $, and obtained new sufficient conditions for the system in the form of inequalities and linear matrix inequalities. The finite-time stability criteria of neutral-type neural networks with hybrid time-varying delays were studied in [37] using the definition of finite-time stability, the Lyapunov function method and bounding inequality techniques. Similarly, in [38], Zheng et al. studied the finite-time stability and synchronization problems of a memristor-based fractional-order fuzzy cellular neural network by applying the existence and uniqueness of the Filippov solution of the network combined with the Banach fixed point theorem, the definition of finite-time stability of the network and the Gronwall-Bellman inequality, and by designing a simple linear feedback controller.
Stability analysis in the context of time-delay systems usually applies an appropriate Lyapunov-Krasovskii functional (LKF), as in [1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,34,36,39], estimating the upper bounds of its derivative along the trajectories of the system. Triple and quadruple integrals may be useful in the LKF, as in [1,2,5,8,10,11,12,13,16,18,19,23,25,36,39]. Many techniques have been applied to approximate the upper bound of the LKF derivative, such as the Jensen inequality [1,2,5,6,8,11,18,19,24,25,28,34,36,39], the Wirtinger-based integral inequality [4,10], a tighter inequality lemma [20], delay-dependent stability [3,13,14,15,16,19,22,23,24,28], the delay partitioning method [9,15,27], the free-weighting matrix method [1,10,15,17,18,23,26,34], positive diagonal matrices [2,5,6,8,10,11,12,13,16,17,19,25,27,28], linear matrix inequality (LMI) techniques [1,3,8,9,11,12,13,15,21,23,24,26,28,39] and other techniques [9,13,14,16,18,36]. In [4], Zeng investigated stability and dissipativity analysis for static neural networks with interval time-varying delay via a new augmented LKF, applying the Wirtinger-based inequality. In [6], Gao et al. studied the stability problem for neural networks with time-varying delay via a new LKF in which the time delay needs to be differentiable.
Based on the above, the finite-time exponential stability criteria of NNs with non-differentiable time-varying delay have not yet been investigated. As a first effort, this article addresses this issue, and its main contributions are:
- We introduce a new augmented LKF $ V_{1}(t, x_{t}) = x^{T}(t)P_{1}x(t)+2x^{T}(t)P_{2} \int_{t-h_{2}}^{t}x(s)ds+\left(\int_{t-h_{2}}^{t}x(s)ds \right)^{T}P_{3} \int_{t-h_{2}}^{t}x(s)ds+2x^{T}(t)P_{4} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds+2\left(\int_{t-h_{2}}^{t}x(s)ds \right)^{T} P_{5} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds+ \left(\int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds\right)^{T} P_{6} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds $ to analyze the finite-time stability of NNs. The augmented Lyapunov matrices $ P_{i}, \ i = 1, 2, 3, 4, 5, 6, $ need not be positive definite.
- The time-varying delay in the finite-time stability problems of NNs is non-differentiable, which differs from the time-delay cases in [1,2,3,4,5,6,7,15,20].
- Numerical examples illustrate that the results of this research are much less conservative than the finite-time stability criteria in [1,2,3,4,5,6,7,15,20].
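In block form, the functional $ V_{1}(t, x_{t}) $ above is a single quadratic form in an augmented vector $ \eta(t) $, which makes clear why only the composite block matrix, rather than each $ P_{i} $ individually, needs to be positive definite:

```latex
V_{1}(t,x_{t}) = \eta^{T}(t)
\begin{bmatrix} P_{1} & P_{2} & P_{4} \\ * & P_{3} & P_{5} \\ * & * & P_{6} \end{bmatrix}
\eta(t),
\qquad
\eta(t) = \begin{bmatrix} x(t) \\[2pt] \int_{t-h_{2}}^{t}x(s)\,ds \\[2pt] \int_{-h_{2}}^{0}\int_{t+s}^{t}x(\delta)\,d\delta\, ds \end{bmatrix}.
```

Expanding this quadratic form term by term recovers exactly the six terms of $ V_{1} $ listed above.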
The new LKF, with its triple integral terms, is handled by utilizing Jensen's inequality, a new inequality from [34] and the corollary from [39], together with the activation function conditions and positive diagonal matrices, without free-weighting matrix variables. Some novel sufficient conditions are obtained for the finite-time stability of NNs with time-varying delays in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to show the benefit of the new LKF approach. To the best of our knowledge, to date, there have been no publications on this problem of finite-time exponential stability of NNs.
The rest of the paper is arranged as follows. Section 2 supplies the considered network and suggests some definitions, propositions and lemmas. Section 3 presents the finite-time exponential stability of NNs with time-varying delay via the new LKF method. Two numerical examples with theoretical results and conclusions are provided in Sections 4 and 5, respectively.
2. Problem formulation
This paper uses the following notation: $ \mathbb{R} $ stands for the set of real numbers; $ \mathbb{R}^n $ denotes $ n $-dimensional Euclidean space; $ \mathbb{R}^{m \times n} $ is the set of all $ m \times n $ real matrices; $ A^T $ and $ A^{-1} $ signify the transpose and the inverse of a matrix $ A $, respectively; $ A $ is symmetric if $ A = A^T $; if $ A $ and $ B $ are symmetric matrices, $ A > B $ means that $ A-B $ is a positive definite matrix; $ I $ denotes the identity matrix of appropriate dimension; the symmetric term in a matrix is denoted by $ * $; $ sym\{A\} = A+A^{T} $; and a block diagonal matrix is denoted by diag{...}.
Let us consider the following neural network with time-varying delays:
where $ x(t) = [x_{1}(t), x_{2}(t), ..., x_{n}(t)]^{T} $ denotes the state vector of the $ n $ neurons; $ A = diag \{a_{1}, a_{2}, ..., a_{n} \} > 0 $ is a diagonal matrix; $ B $ and $ C $ are known real constant matrices with appropriate dimensions; $ f(W(.)) = [f_{1}(W_{1}x(.)), f_{2}(W_{2}x(.)), ..., f_{n}(W_{n}x(.))] $ and $ g(W(.)) = [g_{1}(W_{1}x(.)), g_{2}(W_{2}x(.)), ..., g_{n}(W_{n}x(.))] $ denote the neural activation functions; $ W = [W_{1}^{T}, W_{2}^{T}, ..., W_{n}^{T}] $ is the delayed connection weight matrix; $ \phi (t) \in C[[-h_2, 0], \mathbb{R}^n] $ is the initial function. The time-varying delay function $ h(t) $ satisfies the following conditions:
where $ h_{1}, h_{2} $ are the known real constant scalars.
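As a concrete check, the short script below (using the delay from the numerical examples in Section 4; the sampling grid is an arbitrary choice) confirms that $ h(t) = 0.6+0.5\vert \sin t \vert $ stays within the bounds $ h_1 = 0.6 $ and $ h_2 = 1.1 $, even though it is not differentiable wherever $ \sin t = 0 $:

```python
import math

def h(t):
    """Interval time-varying delay used in the numerical examples:
    bounded, but not differentiable wherever sin t = 0."""
    return 0.6 + 0.5 * abs(math.sin(t))

h1, h2 = 0.6, 1.1  # lower and upper delay bounds from condition (2.2)

# Sample the delay densely over [0, 100] and confirm it never
# leaves the interval [h1, h2].
samples = [h(0.001 * k) for k in range(100001)]
assert h1 <= min(samples) <= max(samples) <= h2
```

Such a delay satisfies (2.2) while having no bound on its derivative, which is exactly the setting the present criteria allow.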
The neuron activation functions satisfy the following condition:
Assumption 1. The neuron activation function $ f(\cdot) $ is continuous and bounded, and satisfies:
When $ \theta_2 = 0, $ Eq (2.3) can be rewritten as the following condition:
where $ f(0) = 0 $ and $ k_{i}^{-}, k_{i}^{+} $ are given constants.
From (2.3) and (2.4), for $ i = 1, 2, ..., n, $ it follows that
Based on Assumption 1, there exists an equilibrium point $ x^{*} = [x^{*}_{1}, x^{*}_{2}, ..., x^{*}_{n}]^{T} $ of neural network (2.1).
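For instance, the activation $ \tanh $ used in the examples of Section 4 satisfies these sector-type bounds with $ k_i^{-} = 0 $ and $ k_i^{+} = 1 $; a small numerical sketch (the sampling grid is an arbitrary choice) checks the difference quotients:

```python
import math

km, kp = 0.0, 1.0  # sector bounds k_i^- and k_i^+ for tanh

# Check the difference quotient (f(a) - f(b)) / (a - b) on a grid:
# for tanh it is always strictly positive and never exceeds 1.
pts = [-3.0 + 0.1 * k for k in range(61)]
for a in pts:
    for b in pts:
        if a == b:
            continue
        slope = (math.tanh(a) - math.tanh(b)) / (a - b)
        assert km < slope <= kp + 1e-12
```

The bound $ k_i^{+} = 1 $ is tight, since the difference quotient approaches $ \tanh'(0) = 1 $ near the origin.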
To prove the main results, the following Definition, Proposition, Corollary and Lemmas are useful.
Definition 1. [34] Given a positive matrix $ M $ and positive constants $ k_1, k_2, T_{f} $ with $ k_1 < k_2, $ the time-delay system described by (2.1) with delay condition (2.2) is said to be finite-time stable with respect to $ (k_1, k_2, T_{f}, h_1, h_2, M), $ if the state variables satisfy the following relationship:
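In the standard formulation of [34], this relationship takes the following form (restated here under that assumption):

```latex
\sup_{s \in [-h_{2},\,0]} \phi^{T}(s)M\phi(s) \leq k_{1}
\quad \Longrightarrow \quad
x^{T}(t)Mx(t) < k_{2}, \qquad \forall t \in [0, T_{f}].
```

That is, any trajectory starting from an initial function bounded by $ k_1 $ in the weighted norm remains below the threshold $ k_2 $ over the whole finite horizon $ [0, T_f] $.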
Proposition 2. [34] For any positive definite matrix $ Q $ and any differentiable function $ z:[bd_{L}, bd_{U}]\rightarrow \mathbb{R}^{n}, $ the following inequality holds:
where $ \bar{\zeta}^T = \big[z(bd_U) \ \ z(bd_L) \ \ \frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds \big] $ and $ bd_{UL} = bd_{U}-bd_{L}. $
Lemma 3. [40] (Schur complement) Given constant symmetric matrices $ X, Y, Z $ satisfying $ X = X^{T} $ and $ Y = Y^{T} > 0, $ then $ X+Z^{T}Y^{-1}Z < 0 $ if and only if
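A scalar instance of Lemma 3 illustrates the equivalence; the particular numbers are illustrative only:

```python
# Scalar instance of Lemma 3 (Schur complement):
#   X + Z^T Y^{-1} Z < 0  iff  [[X, Z^T], [Z, -Y]] < 0, with Y > 0.
X, Y, Z = -3.0, 2.0, 1.0   # illustrative numbers with Y > 0

lhs_negative = (X + Z * Z / Y) < 0   # here: -3 + 1/2 = -2.5 < 0

# Negative definiteness of the symmetric 2x2 block [[X, Z], [Z, -Y]]
# via leading principal minors: top-left entry < 0 and determinant > 0.
block_negative = (X < 0) and (X * (-Y) - Z * Z > 0)

assert lhs_negative and block_negative
```

The same equivalence is what converts the nonlinear condition (3.9) of Theorem 9 into the LMIs of Remark 10.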
Corollary 4. [39] For a given symmetric matrix $ Q > 0, $ any vector $ \nu _{0} $ and matrices $ J_1, J_2, J_3, J_4 $ with proper dimensions and any continuously differentiable function $ z:[bd_{L}, bd_{U}]\rightarrow \mathbb{R}^{n} $, the following inequality holds:
where $ bd_{UL} = bd_{U}-bd_{L}, $
$ \gamma_1 = z(bd_U)-\frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds, \ \ \gamma_2 = z(bd_U)+\frac{2}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds -\frac{6}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta, \\ \gamma_3 = \frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds- z(bd_L), \ \ \gamma_4 = z(bd_L)-\frac{4}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds+\frac{6}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta. $
Lemma 5. [39] For any matrix $ Q > 0 $ and any differentiable function $ z:[bd_{L}, bd_U]\rightarrow \mathbb{R}^{n} $ such that the integrals below are well defined, the following holds:
where $ \kappa_1 = z(bd_U)-z(bd_L), \ \ \kappa_2 = z(bd_U)+z(bd_L)-\frac{2}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds $,
$ \kappa_3 = z(bd_U)-z(bd_L)+\frac{6}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds -\frac{12}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta $ and $ bd_{UL} = bd_{U}-bd_{L}. $
Lemma 6. [41] For any positive definite symmetric constant matrix $ Q $ and scalar $ \tau > 0 $ such that the following integrals are well defined, it holds that
3. Main results
Let $ h_1, h_2 $ and $ \alpha $ be constants,
$ h_{21} = h_2-h_1, \ \ h_{t1} = h(t)-h_1, \ \ h_{2t} = h_2-h(t), $
$ \mathscr{N}_1 = \frac{1-e^{-\alpha h_1}}{\alpha}, \ \ \mathscr{N}_2 = \frac{1-e^{-\alpha h_2}}{\alpha}, \ \ \mathscr{N}_3 = \frac{1-(1+\alpha h_{1})e^{-\alpha h_1}}{\alpha ^{2}}, \ \ \mathscr{N}_4 = \frac{(1+\alpha h_{1})e^{-\alpha h_1}-(1+\alpha h_{2})e^{-\alpha h_2}}{\alpha ^{2}}, $ $ \ \ \mathscr{N}_5 = \frac{1-(1+\alpha h_{2})e^{-\alpha h_2}}{\alpha ^{2}}, $
$ \ \ \mathscr{N}_6 = \frac{-3+2\alpha h_{2}+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha ^{3}}, \ \ \mathscr{N}_7 = \frac{-3+2\alpha h_{1}+4e^{-\alpha h_1}-e^{-2\alpha h_1}}{4\alpha ^{3}}, \ \ \mathscr{N}_8 = \frac{-3-2(2+\alpha h_{1})e^{-\alpha h_1}+e^{-2\alpha h_1}}{4\alpha ^{3}}, $
$ \ \ \mathscr{N}_9 = \frac{4(\alpha h_{21}-1)e^{-\alpha h_1}-(2\alpha h_{21}-1)e^{-2\alpha h_1}+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha ^{3}}, \ \ \mathscr{N}_{10} = \frac{4e^{-\alpha h_1}-e^{-2\alpha h_1}-4e^{-\alpha h_2}+(1-2\alpha h_{21})e^{-2\alpha h_2}}{4\alpha ^{3}}, $
$ I = M^{\frac{1}{2}}M^{-\frac{1}{2}} = M^{-\frac{1}{2}}M^{\frac{1}{2}}, \ \ \bar{P}_{i} = M^{-\frac{1}{2}}P_{i}M^{-\frac{1}{2}}, \ \ i = 1, 2, 3, ..., 6, \ \ \bar{Q}_{j} = M^{-\frac{1}{2}}Q_{j}M^{-\frac{1}{2}}, \ \ j = 1, 2, $
$ \bar{R}_{k} = M^{-\frac{1}{2}}R_{k}M^{-\frac{1}{2}}, \ \ k = 1, 2, 3, \ \ \bar{S} = M^{-\frac{1}{2}}SM^{-\frac{1}{2}}, \ \ \bar{T}_{l} = M^{-\frac{1}{2}}T_{l}M^{-\frac{1}{2}}, \ \ l = 1, 2, 3, 4, $
$ \ \ \mathscr{M} = \lambda_{min} \{ \bar{P_{i}}\}, \ \ i = 1, 2, 3, ..., 6, $
$ \mathscr{N} = \lambda_{max}\{ \bar{P_{1}}\}+2\lambda_{max}\{ \bar{P_{2}}\}+\lambda_{max}\{ \bar{P_{3}}\}+2\lambda_{max}\{ \bar{P_{4}}\}+2\lambda_{max}\{ \bar{P_{5}}\}+\lambda_{max}\{ \bar{P_{6}}\} $
$ \ \ +\mathscr{N}_{1}\lambda_{max}\{ \bar{Q_{1}}\}+\mathscr{N}_{2}\lambda_{max}\{ \bar{Q_{2}}\}+h_{1}\mathscr{N}_{3}\lambda_{max}\{ \bar{R_{1}}\}+h_{21}\mathscr{N}_{4}\lambda_{max}\{ \bar{R_{2}}\}+h_{2}\mathscr{N}_{5}\lambda_{max}\{ \bar{R_{3}}\} $
$ \ \ +\mathscr{N}_{6}\lambda_{max}\{ \bar{S}\}+2\lambda_{max}\{ L_{1}\}+2\lambda_{max}\{ L_{2}\}+2\lambda_{max}\{ G_{1}\}+2\lambda_{max}\{ G_{2}\} $
$ \ \ +\mathscr{N}_{7}\lambda_{max}\{ \bar{T_{1}}\}+\mathscr{N}_{8}\lambda_{max}\{ \bar{T_{2}}\}+\mathscr{N}_{9}\lambda_{max}\{ \bar{T_{3}}\}+\mathscr{N}_{10}\lambda_{max}\{ \bar{T_{4}}\}, $
$ L_{1} = \sum_{i = 1}^{n}\lambda_{1i}, \ \ L_{2} = \sum_{i = 1}^{n}\lambda_{2i}, \ \ G_{1} = \sum_{i = 1}^{n}\gamma_{1i}, \ \ G_{2} = \sum_{i = 1}^{n}\gamma_{2i}. $
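As a sanity check on the exponential weights $ \mathscr{N}_{i} $ defined above, for small $ \alpha $ they reduce to the familiar delay-free quantities ($ \mathscr{N}_1 \to h_1 $, $ \mathscr{N}_3 \to h_1^2/2 $, $ \mathscr{N}_5 \to h_2^2/2 $); a short numerical probe (with an arbitrary small $ \alpha $ and the delay bounds of the examples) confirms this:

```python
import math

h1, h2 = 0.6, 1.1      # delay bounds taken from the examples (an assumption)
alpha = 1e-4           # small alpha to probe the limiting values

N1 = (1 - math.exp(-alpha * h1)) / alpha
N3 = (1 - (1 + alpha * h1) * math.exp(-alpha * h1)) / alpha ** 2
N5 = (1 - (1 + alpha * h2) * math.exp(-alpha * h2)) / alpha ** 2

# As alpha -> 0 the exponential weights recover the delay-free
# quantities h1, h1^2/2 and h2^2/2 used in non-exponential LKF bounds.
assert abs(N1 - h1) < 1e-3
assert abs(N3 - h1 ** 2 / 2) < 1e-3
assert abs(N5 - h2 ** 2 / 2) < 1e-3
```

This limiting behavior explains the role of the $ \mathscr{N}_{i} $: they are the exponentially weighted lengths and areas of the integration domains appearing in the LKF.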
The notations for some matrices are defined as follows:
$ f(t) = f(Wx(t)) $ and $ g_{h}(t) = g(Wx(t-h(t))), $
$ \mathscr{W}_{1}(t) = \frac{1}{h_1}\int_{t-h_1}^{t}x(s)ds, \ \ \mathscr{W}_{2}(t) = \frac{1}{h_{t1}}\int_{t-h(t)}^{t-h_1}x(s)ds, \ \ \mathscr{W}_{3}(t) = \frac{1}{h_{2t}}\int_{t-h_2}^{t-h(t)}x(s)ds, $
$ \mathscr{W}_{4}(t) = \frac{1}{h_2}\int_{t-h_2}^{t}x(s)ds, \ \ \mathscr{W}_{5}(t) = \frac{1}{h_{2}}\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)ds d \delta, \ \ \mathscr{W}_{6}(t) = \frac{1}{h_{1}^{2}}\int_{t-h_1}^{t}\int_{\tau}^{t}x(s)ds d \tau, $
$ \mathscr{W}_{7}(t) = \frac{1}{h_{t1}^{2}}\int_{t-h(t)}^{t-h_1}\int_{\tau}^{t-h_1}x(s)ds d \tau, \ \ \mathscr{W}_{8}(t) = \frac{1}{h_{2t}^{2}}\int_{t-h_2}^{t-h(t)}\int_{\tau}^{t-h(t)}x(s)ds d \tau, $
$ \varpi_1(t) = [x^{T}(t) \ \ x^{T}(t-h_1) \ \ x^{T}(t-h(t)) \ \ x^{T}(t-h_2) \ \ f^{T}(t) \ \ g_{h}^{T}(t)]^{T}, $
$ \varpi_2(t) = [\mathscr{W}_{1}^{T}(t) \ \ \mathscr{W}_{2}^{T}(t) \ \ \mathscr{W}_{3}^{T}(t) \ \ \mathscr{W}_{4}^{T}(t) \ \ \mathscr{W}_{5}^{T}(t) \ \ \dot{x}^{T}(t) \ \ \mathscr{W}_{6}^{T}(t) \ \ \mathscr{W}_{7}^{T}(t) \ \ \mathscr{W}_{8}^{T}(t)]^{T}, $
$ \varpi = [\varpi_1^{T}(t) \ \ \varpi_2^{T}(t)]^{T}, $
$ D_{1} = diag \{ k_{11}^{+}, k_{21}^{+}, ..., k_{n1}^{+} \}, D_{2} = diag \{ k_{12}^{+}, k_{22}^{+}, ..., k_{n2}^{+} \} $ and $ D = \max \{D_{1}, D_{2} \}, $
$ E_{1} = diag \{ k_{11}^{-}, k_{21}^{-}, ..., k_{n1}^{-} \}, E_{2} = diag \{ k_{12}^{-}, k_{22}^{-}, ..., k_{n2}^{-} \} $ and $ E = \max \{E_{1}, E_{2} \}, $
$ \zeta_{1}(t) = [x^{T}(t) \ \ x^{T}(t-h_1) \ \ \mathscr{W}^{T}_{1}(t)], \ \ \zeta_{2}(t) = [x^{T}(t-h_1) \ \ x^{T}(t-h(t)) \ \ \mathscr{W}^{T}_{2}(t)], $
$ \zeta_{3}(t) = [x^{T}(t-h(t)) \ \ x^{T}(t-h_2) \ \ \mathscr{W}^{T}_{3}(t)], \ \ \zeta_{4}(t) = [x^{T}(t) \ \ x^{T}(t-h_{2}) \ \ \mathscr{W}^{T}_{4}(t)], $
$ \mathscr{G}_{1} = x(t)-\mathscr{W}_{1}(t), \ \ \mathscr{G}_{2} = x(t)+2\mathscr{W}_{1}(t)-6\mathscr{W}_{6}(t), $
$ \mathscr{G}_{3} = \mathscr{W}_{1}(t)-x(t-h_{1}), \ \ \mathscr{G}_{4} = x(t-h_{1})-4\mathscr{W}_{1}(t)+6\mathscr{W}_{6}(t), $
$ \mathscr{G}_{5} = x(t-h_{1})-\mathscr{W}_{2}(t), \ \ \mathscr{G}_{6} = x(t-h_{1})+2\mathscr{W}_{2}(t)-6\mathscr{W}_{7}(t), $
$ \mathscr{G}_{7} = x(t-h(t))-\mathscr{W}_{3}(t), \ \ \mathscr{G}_{8} = x(t-h(t))+2\mathscr{W}_{3}(t)-6\mathscr{W}_{8}(t), $
$ \mathscr{G}_{9} = \mathscr{W}_{2}(t)-x(t-h(t)), \ \ \mathscr{G}_{10} = x(t-h(t))-4\mathscr{W}_{2}(t)+6\mathscr{W}_{7}(t), $
$ \mathscr{G}_{11} = \mathscr{W}_{3}(t)-x(t-h_{2}), \ \ \mathscr{G}_{12} = x(t-h_{2})-4\mathscr{W}_{3}(t)+6\mathscr{W}_{8}(t). $
Let us consider an LKF for the stability criterion of network (2.1) as follows:
where
Next, we will show that the LKF (3.1) is positive definite as follows:
Proposition 7. Consider an $ \alpha > 0 $. The LKF (3.1) is positive definite, if there exist matrices $ Q_i > 0, (i = 1, 2), $ $ R_j > 0, (j = 1, 2, 3) $, $ T_k > 0, (k = 1, 2, 3, 4) $, $ S > 0 $ and any matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $, such that the following LMI holds:
where
Proof. We let $ z_1(t) = h_{2}\mathscr{W}_{4}(t) $, $ z_2(t) = h_{2}\mathscr{W}_{5}(t) $, then
Combining this with $ V_{2}(t, x_t) $, $ V_{4}(t, x_t) $, $ V_{5}(t, x_t) $ and $ V_{8}(t, x_t)-V_{10}(t, x_t) $, it follows that if the LMI (3.2) holds, the LKF (3.1) is positive definite.
Remark 8. It is worth noting that in most previous papers [1,2,3,4,5,6,7,15,20], the Lyapunov matrices $ P_1 $, $ P_3 $ and $ P_6 $ must be positive definite. In our work, we remove this restriction by constructing the composite Lyapunov terms $ V_{1}(t, x_t) $, $ V_{3}(t, x_t) $, $ V_{6}(t, x_t) $ and $ V_{7}(t, x_t) $ as shown in the proof of Proposition 7; therefore, $ P_1 $, $ P_3 $ and $ P_6 $ are only required to be real matrices. Consequently, our results are less conservative and more applicable than the aforementioned works.
Theorem 9. Given a positive matrix $ M > 0 $, the time-delay system described by (2.1) with delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M), $ if there exist symmetric positive definite matrices $ Q_i > 0 \ (i = 1, 2) $, $ R_j > 0 \ (j = 1, 2, 3) $, $ T_k > 0 \ (k = 1, 2, 3, 4) $, $ K_l > 0 \ (l = 1, 2, 3, ..., 10) $, diagonal matrices $ S > 0, \ H_{m} > 0 \ (m = 1, 2, 3) $, and matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $ such that the following LMIs hold:
where $ i = 1, 2, 3, ..., 12, \ \ b_{1} = \frac{1}{6}, \ \ b_{2} = \frac{1}{h_{t1}}, \ \ b_{3} = \frac{1}{h_{2t}}, $
$ \chi_{1} = -2 e^{2 \alpha h_{1}}T_{1} $, $ \ \ \chi_{2} = -4 e^{2 \alpha h_{1}}T_{1} $, $ \ \ \chi_{3} = -2 e^{2 \alpha h_{1}}T_{2} $, $ \ \ \chi_{4} = -4 e^{2 \alpha h_{1}}T_{2} $, $ \ \ \chi_{5} = \chi_{7} = -2 e^{2 \alpha h_{2}}T_{3} $, $ \ \ \chi_{6} = \chi_{8} = -4 e^{2 \alpha h_{2}}T_{3} $, $ \ \ \chi_{9} = \chi_{11} = -2 e^{2 \alpha h_{2}}T_{4} $, $ \ \ \chi_{10} = \chi_{12} = -4 e^{2 \alpha h_{2}}T_{4} $,
and
Proof. Let us choose the LKF defined as in (3.1). By Proposition 7, it is easy to check that
Taking the derivative of $ V_i(t, x_t), i = 1, 2, 3, ..., 10 $ along the solution of the network (2.1), we get
Define
Applying Proposition 2, we obtain
Applying Lemma 6, this leads to
From Corollary 4, we have
By Lemma 5, we obtain
Using the assumptions on the activation functions (2.5) and (2.6), for any diagonal matrices $ H_1, H_2, H_3 > 0, $ it follows that
Multiplying (2.1) by $ (2Qx(t)+2Q\dot{x}(t))^{T}, $ we have the following identity:
From (3.10)-(3.18), we obtain
where $ \Omega _{1} $ and $ \Omega _{2} $ are given in Eqs (3.4) and (3.8). Since $ \Omega _{1} < 0 $ and $ \Omega _{2} < 0 $, $ \dot{V}(t, x_t)+\alpha V(t, x_t)\leq 0 $, then, we have
Integrating both sides of (3.19) from $ 0 $ to $ t $ with $ t \in [0, T_{f}] $, we obtain
with
Let $ I = M^{\frac{1}{2}}M^{-\frac{1}{2}} = M^{-\frac{1}{2}}M^{\frac{1}{2}}, \ \ \bar{P}_{i} = M^{-\frac{1}{2}}P_{i}M^{-\frac{1}{2}}, \ \ i = 1, 2, 3, ..., 6, $
$ \bar{Q}_{j} = M^{-\frac{1}{2}}Q_{j}M^{-\frac{1}{2}}, \ \ j = 1, 2, \ \ \bar{R}_{k} = M^{-\frac{1}{2}}R_{k}M^{-\frac{1}{2}}, \ \ k = 1, 2, 3, \ \ \bar{T}_{l} = M^{-\frac{1}{2}}T_{l}M^{-\frac{1}{2}}, l = 1, 2, 3, 4. $ Therefore,
Since $ V(t, x_{t}) \geq V_{1}(t, x_{t}) $, we have
For any $ t \in [0, T_{f}], $ it follows that,
This shows that condition (3.9) holds. Therefore, the delayed neural network described by (2.1) with delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M). $
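The exponential bound used in the proof is the standard comparison argument: since $ \dot{V}(t, x_t)+\alpha V(t, x_t) \leq 0 $ on $ [0, T_f] $,

```latex
\frac{d}{dt}\big(e^{\alpha t}V(t,x_{t})\big)
  = e^{\alpha t}\big(\dot{V}(t,x_{t})+\alpha V(t,x_{t})\big) \leq 0
\quad \Longrightarrow \quad
V(t,x_{t}) \leq e^{-\alpha t}V(0,x_{0}), \qquad t \in [0,T_{f}].
```

Combining this decay estimate with the lower bound $ V(t,x_t) \geq V_1(t,x_t) $ and the bound on $ V(0,x_0) $ in terms of $ k_1 $ yields condition (3.9).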
Remark 10. Condition (3.9) is not in the standard form of LMIs. To verify that it is equivalent to a set of LMI relations, we apply the Schur complement (Lemma 3) and let $ \mathscr{B}_{i}, \ i = 1, 2, 3, ..., 21, $ be positive scalars with
Let us define the following condition
It follows that condition (3.9) is equivalent to the relations and LMIs as follows:
where $ I \in \mathbb{R}^{n \times n} $ is an identity matrix, $ \psi_{1, 1} = -\mathscr{B}_{1}k_{2}e^{-\alpha T_{f}}, \ \psi_{1, 2} = \mathscr{B}_{2}\sqrt{k_{1}}, \ \psi_{1, 3} = \mathscr{B}_{3}\sqrt{2k_{1}}, \ \psi_{1, 4} = \mathscr{B}_{4}\sqrt{k_{1}}, \ \psi_{1, 5} = \mathscr{B}_{5}\sqrt{2k_{1}}, \ \psi_{1, 6} = \mathscr{B}_{6}\sqrt{2k_{1}}, \ \psi_{1, 7} = \mathscr{B}_{7}\sqrt{k_{1}}, \ \psi_{1, 8} = \mathscr{B}_{8}\sqrt{k_{1}\mathscr{N}_{1}}, \ \psi_{1, 9} = \mathscr{B}_{9}\sqrt{k_{1}\mathscr{N}_{2}}, \ \psi_{1, 10} = \mathscr{B}_{10}\sqrt{k_{1}h_{1}\mathscr{N}_{3}}, \ \psi_{1, 11} = \mathscr{B}_{11}\sqrt{k_{1}h_{21}\mathscr{N}_{4}}, \ \psi_{1, 12} = \mathscr{B}_{12}\sqrt{k_{1}h_{2}\mathscr{N}_{5}}, \ \psi_{1, 13} = \mathscr{B}_{13}\sqrt{k_{1}\mathscr{N}_{6}}, \ \psi_{1, 14} = \mathscr{B}_{14}\sqrt{2k_{1}}, \ \psi_{1, 15} = \mathscr{B}_{15}\sqrt{2k_{1}}, \ \psi_{1, 16} = \mathscr{B}_{16}\sqrt{2k_{1}}, \ \psi_{1, 17} = \mathscr{B}_{17}\sqrt{2k_{1}}, \ \psi_{1, 18} = \mathscr{B}_{18}\sqrt{k_{1}\mathscr{N}_{7}}, \ \psi_{1, 19} = \mathscr{B}_{19}\sqrt{k_{1}\mathscr{N}_{8}}, \ \psi_{1, 20} = \mathscr{B}_{20}\sqrt{k_{1}\mathscr{N}_{9}}, \ \psi_{1, 21} = \mathscr{B}_{21}\sqrt{k_{1}\mathscr{N}_{10}}. $
Corollary 11. Given a positive matrix $ M > 0 $, the time-delay system described by (2.1) with delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M), $ if there exist symmetric positive definite matrices $ Q_i > 0 \ (i = 1, 2) $, $ R_j > 0 \ (j = 1, 2, 3) $, $ T_k > 0 \ (k = 1, 2, 3, 4) $, $ K_l > 0 \ (l = 1, 2, 3, ..., 10) $, diagonal matrices $ S > 0, \ H_{m} > 0 \ (m = 1, 2, 3) $, matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $, and positive scalars $ \alpha, \ \mathscr{B}_{i}, \ i = 1, 2, 3, ..., 21, $ such that the LMIs and inequalities (3.3)-(3.8) and (3.20)-(3.26) hold.
Remark 12. If the matrices of the delayed NN (2.1) are chosen as $ B = W_{0}, C = W_{1}, W = W_{2}, $ then the system turns into the delayed NN proposed in [23],
where $ 0 \leq h(t) \leq h_M $ and $ \dot{h} (t) \leq h_D; $ it follows that (3.28) is a special case of the delayed NN in (2.1).
Remark 13. Replacing $ W_{0} = B, W_{1} = C, W_{2} = W, d_{1}(t) = d(t) = h(t) $ and $ d_{2}(t) = 0 $ and setting the external constant input equal to zero in Eq (1) of the delayed NN studied in [16], we have
then (3.28) is the same NN as (2.1), so that (2.1) is a particular case of the delayed NN in [16].
Remark 14. If we choose $ B = 0, C = 1 $ and $ g = f $ and set the constant input equal to zero in the delayed NN (2.1), then it can be rewritten as
then (3.29) is a special case of the NN in (2.1), as studied in [2,3,4,5,6,10,12,13,20].
Remark 15. If we set $ B = W_0, C = W_1 $ and $ W = 1 $ and set the constant input equal to zero in the delayed NN (2.1), then (2.1) turns into
then (3.30) is a special case of the NN in (2.1), as studied in [8,11,24,28]. Similarly, if we rearrange the matrices in the delayed NN (2.1) and set $ W = 1 $, it coincides with the delayed NNs proposed in [9,19,22].
Remark 16. The time delay in this work is defined as a continuous function belonging to a given interval, so that lower and upper bounds for the time-varying delay exist, while the time-delay function need not be differentiable. In some previous studies, the time-delay function needs to be differentiable, as reported in [2,3,4,5,6,8,9,10,11,12,13,15,16,17,19,20,22,23,24,28].
4. Numerical examples
In this section, we provide numerical examples with their simulations to demonstrate the effectiveness of our results.
Example 17. Consider the neural networks (2.1) with parameters as follows:
The activation function satisfies Eq (2.3) with
By applying the Matlab LMI Toolbox to solve the LMIs in (3.4)-(3.8), we obtain the upper bound $ h_{max} $ of NNs (2.1) for non-differentiable delay (no bound $ \mu $ on the delay derivative); Table 1 compares the results of this paper with those proposed in [1,2,3,4,5,6,7,15,20]. The upper bounds obtained in this work are larger than the corresponding ones. Note that the symbol ‘-’ represents upper bounds not provided in the respective works or in this paper.
For the numerical simulation of finite-time stability of the delayed neural network (2.1), we take the time-varying delay $ h(t) = 0.6+0.5\vert \sin t \vert $ and the initial condition $ \phi (t) = [-0.8, -0.3, 0.8]; $ then $ x^{T}(0)Mx(0) = 1.37 $ with $ M = I $, so we choose $ k_{1} = 1.4 $ and the activation function $ g(x(t)) = \tanh (x(t)). $ The trajectories $ x_1(t), x_2(t) $ and $ x_3(t) $ of this network are shown in Figure 1. In addition, Figure 2 shows the trajectory of $ x^{T}(t)x(t) $ for the delayed neural network (2.1) with $ k_{2} = 1.575. $
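The qualitative behavior can be reproduced with a minimal Euler simulation. The sketch below uses a hypothetical scalar stand-in for (2.1) (the constants $ a $, $ c $ and the constant initial history are illustrative assumptions, not the three-neuron parameters of Example 17), with the same delay and activation:

```python
import math

# Hypothetical scalar stand-in for network (2.1):
#   x'(t) = -a x(t) + c tanh(x(t - h(t))),  h(t) = 0.6 + 0.5 |sin t|.
# The constants a, c and the constant initial history are illustrative
# assumptions, not the three-neuron parameters of Example 17.
a, c = 2.0, 0.5
dt, T = 0.001, 5.0
h_max = 1.1

n_hist = int(h_max / dt)
hist = [1.0] * (n_hist + 1)          # constant initial function phi = 1
x = 1.0

for k in range(int(T / dt)):
    t = k * dt
    h = 0.6 + 0.5 * abs(math.sin(t))
    delay_idx = int(h / dt)
    x_delayed = hist[-(delay_idx + 1)]   # x(t - h(t)) from the buffer
    x = x + dt * (-a * x + c * math.tanh(x_delayed))
    hist.append(x)

# Since a > c, the state decays: x(t)^2 stays below its initial value
# over the whole horizon, in the spirit of Definition 1.
assert x * x < 1.0
```

Because $ a > c $ and $ \tanh $ has Lipschitz constant 1, the state decays for every delay in $ [h_1, h_2] $, so the weighted quadratic $ x^{T}(t)x(t) $ remains below its threshold, mirroring Figures 1 and 2.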
Example 18. Consider the neural networks (2.1) with parameters as follows:
The activation function satisfies Eq (2.3) with
Table 2 compares the results obtained in [2,3,5,6,20] with this work. Using the Matlab LMI Toolbox, we obtain the upper bound $ h_{max} $ of NNs (2.1) for differentiable delay with derivative bound $ \mu $. The upper bounds obtained in this paper are larger than the corresponding previously proposed ones. As before, the symbol ‘-’ represents upper bounds not given in those works or in this study.
For the numerical simulation of finite-time stability of the delayed neural network (2.1), we take the time-varying delay $ h(t) = 0.6+0.5\vert \sin t \vert $ and the initial condition $ \phi (t) = [-0.4, 0.5]; $ then $ x^{T}(0)Mx(0) = 0.41 $ with $ M = I $, so we choose $ k_{1} = 0.5 $ and the activation function $ g(x(t)) = \tanh (x(t)). $ The trajectories $ x_1(t) $ and $ x_2(t) $ of this network are shown in Figure 3. In addition, Figure 4 shows the trajectory of $ x^{T}(t)x(t) $ for the delayed neural network (2.1) with $ k_{2} = 0.85. $
Example 19. Consider the neural networks (2.1) with parameters as follows:
and the activation functions $ f(x(t)) = g(x(t)) = \tanh (x(t)) $, with the time-varying delay function $ h(t) = 0.6+0.5\vert \sin t \vert. $ With the initial condition $ \phi (t) = [0.4, 0.2, 0.4], $ the solution of the neural network is shown in Figure 5. We can see in Figure 6 that the trajectory of $ x^T(t)Mx(t) = \| x(t)\|^2 $ diverges as $ t\rightarrow \infty $. To further investigate the maximum value of $ T_f $ such that the neural network (2.1) is finite-time stable with respect to $ (0.6, k_2, T_f, 0.6, 1.1, I) $, we fix $ k_2 = 500 $; by solving the LMIs in Theorem 9 and Corollary 11, we obtain the maximum value $ T_f = 8.395 $.
5. Conclusions
In this research, a finite-time stability criterion for neural networks with non-differentiable time-varying delays was proposed via a new argument based on the Lyapunov-Krasovskii functional (LKF) method. The new LKF was improved by including triple integral terms, which were bounded using improved integral inequalities and positive diagonal matrices, without free-weighting matrix variables. The improved finite-time sufficient conditions for the neural network with time-varying delay were expressed in terms of linear matrix inequalities (LMIs), and the results were better than those reported in previous research.
Acknowledgments
The first author was supported by Faculty of Science and Engineering, and Research and Academic Service Division, Kasetsart University, Chalermprakiat Sakon Nakhon province campus. The second author was financially supported by the Thailand Research Fund (TRF), the Office of the Higher Education Commission (OHEC) (grant number : MRG6280149) and Khon Kaen University.
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this paper.