Research article

How synergistic or antagonistic effects may influence the mutual hazard ranking of chemicals

  • The presence of various agents, including humic materials, nanomaterials, microplastics, or simply specific chemical compounds, may cause changes in the apparent persistence, bioaccumulation, and/or toxicity (PBT) of a chemical compound, leading to either increased or decreased PBT characteristics and thus an increased or decreased hazard evaluation. In the present paper, a series of chloro-containing obsolete pesticides is studied as an illustrative example. Partial order methodology is used to quantify how changed P, B, or T characteristics of methoxychlor (MEC) influence the measure of the hazard of MEC relative to the other 11 compounds in the series investigated. Not surprisingly, an increase in one of the three indicators (P, B, or T) leads to an increased average order and thus an increased relative hazard as a result of a synergistic effect. A decrease in one of the indicator values analogously causes a decreased average order/relative hazard through an antagonistic effect, although this effect is less pronounced. It is further seen that the effects of changing the apparent values of the three indicators differ: persistence appears more important than bioaccumulation, which in turn appears more important than toxicity, in agreement with previous work. The results are discussed with reference to the European framework for the registration, evaluation and authorisation of chemicals (REACH).

    Citation: Lars Carlsen. How synergistic or antagonistic effects may influence the mutual hazard ranking of chemicals[J]. AIMS Environmental Science, 2015, 2(2): 241-252. doi: 10.3934/environsci.2015.2.241



    Problems in artificial intelligence (AI) can involve complex data or tasks; consequently, neural networks (NNs), as in [1-36], can be beneficial because they avoid the need to design AI functions manually. Knowledge of NNs has been applied in various fields, including biology, artificial intelligence, static image processing, associative memory, electrical engineering and signal processing. The connections between neurons are biologically weighted: a positive weight reflects an excitatory connection, while a negative weight reflects an inhibitory one.

    Activation functions determine the output of a learning model and the accuracy of the trained model, and they can make or break a large NN. Activation functions are also important in determining the convergence speed of NNs and, in some cases, may prevent convergence in the first place, as reported in [1-28]. NNs consist of processing units and learning algorithms. Time delay is one of the common features of the operation of neurons; it plays an important role in lowering efficiency and stability and may lead to dynamic behavior involving chaos, uncertainty and divergence, as in [1-25]. Therefore, NNs with time delay have received considerable attention in many fields [1-25].

    It is well known that many real processes depend on delays, whereby the current state of affairs depends on previous states. Delays occur in many control systems, for example, aircraft control systems, biological modeling, chemical processes and electrical networks. Time delay is often the main source of instability and poor performance of a system.

    There are two kinds of stability conditions for time-delay systems: delay-dependent and delay-independent. Delay-dependent conditions are often less conservative than delay-independent ones, especially when the delays are relatively small. Delay-dependent stability conditions depend mainly on the upper bound of the admissible delay. Delay-dependent stability for interval time-varying delay has been broadly studied and adapted in various research fields [3,13,14,15,16,19,22,23,24,28]. A time-varying delay whose range is bounded within an interval is called an interval time-varying delay. Some researchers have reported on NN problems with interval time-varying delay, as in [1,2,3,4,5,7,11,12,13,14,15,21,25], while [16] reported on NN stability with additive time-varying delays.

    There are two types of stability over a finite time interval, namely finite-time stability and fixed-time stability. With finite-time stability, the system converges within a certain period for any initial condition, while with fixed-time stability, the convergence time is the same for all initial conditions within the domain. Both notions have been extensively adapted in many fields, such as [26,29,30,31,32,33,34,35,37,38]. In [34], J. Puangmalai et al. investigated finite-time stability criteria of linear systems with non-differentiable time-varying delay via a new integral inequality, based on a free matrix, for bounding the integral $ \int_{a}^{b}\dot{z}^{T}(s)M\dot{z}(s)ds $, and obtained new sufficient conditions for the system in the form of inequalities and linear matrix inequalities. The finite-time stability criteria of neutral-type neural networks with hybrid time-varying delays were studied using the definition of finite-time stability, the Lyapunov function method and inequality bounding techniques; see [37]. Similarly, in [38], M. Zheng et al. studied the finite-time stability and synchronization problems of a memristor-based fractional-order fuzzy cellular neural network by applying the existence and uniqueness of the Filippov solution of the network combined with the Banach fixed point theorem, the definition of finite-time stability of the network, the Gronwall-Bellman inequality, and the design of a simple linear feedback controller.

    Stability analysis in the context of time-delay systems usually applies an appropriate Lyapunov-Krasovskii functional (LKF), as in [1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,34,36,39], estimating the upper bounds of its derivative along the trajectories of the system. Triple and quadruple integrals may be useful in the LKF, as in [1,2,5,8,10,11,12,13,16,18,19,23,25,36,39]. Many techniques have been applied to estimate the upper bound of the LKF derivative, such as the Jensen inequality [1,2,5,6,8,11,18,19,24,25,28,34,36,39], the Wirtinger-based integral inequality [4,10], a tighter inequality lemma [20], delay-dependent stability [3,13,14,15,16,19,22,23,24,28], the delay partitioning method [9,15,27], the free-weighting matrix method [1,10,15,17,18,23,26,34], positive diagonal matrices [2,5,6,8,10,11,12,13,16,17,19,25,27,28], linear matrix inequality (LMI) techniques [1,3,8,9,11,12,13,15,21,23,24,26,28,39] and other techniques [9,13,14,16,18,36]. In [4], H. B. Zeng investigated stability and dissipativity analysis for static neural networks with interval time-varying delay via a new augmented LKF, applying the Wirtinger-based inequality. In [6], Z.-M. Gao et al. studied the stability problem for neural networks with time-varying delay via a new LKF in which the time delay needs to be differentiable.

    Based on the above, this article investigates finite-time exponential stability criteria of NNs with a non-differentiable time-varying delay. As a first effort on this topic, its main contributions are:

    -We introduce a new augmented LKF $ V_{1}(t, x_{t}) = x^{T}(t)P_{1}x(t)+2x^{T}(t)P_{2} \int_{t-h_{2}}^{t}x(s)ds+\left(\int_{t-h_{2}}^{t}x(s)ds \right)^{T}P_{3} \int_{t-h_{2}}^{t}x(s)ds+2x^{T}(t)P_{4} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds+2\left(\int_{t-h_{2}}^{t}x(s)ds \right)^{T} P_{5} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds+ \left(\int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds\right)^{T} P_{6} \int_{-h_{2}}^{0} \int_{t+s}^{t}x(\delta)d\delta ds $ to analyze the finite-time stability problem of NNs. The augmented Lyapunov matrices $ P_{i}, \ \ i = 1, 2, 3, 4, 5, 6 $ do not need to be positive definite.

    -In the finite-time stability problems of NNs considered here, the time-varying delay is only required to be bounded, not differentiable, which differs from the time-delay cases in [1,2,3,4,5,6,7,15,20].

    -Numerical examples illustrate that the criteria obtained in this research are much less conservative than the finite-time stability criteria in [1,2,3,4,5,6,7,15,20].

    The new LKF, with its triple integrals, is handled by utilizing Jensen's inequality, a new inequality from [34] and the corollary from [39], together with the activation functions and positive diagonal matrices, without free-weighting matrix variables. Some novel sufficient conditions are obtained for the finite-time stability of NNs with time-varying delays in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to show the benefit of the new LKF approach. To the best of our knowledge, there have been no previous publications on this finite-time exponential stability problem for NNs.
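Feasibility conditions of this LMI type are checked numerically in practice. As a minimal sketch (using a small hypothetical Hurwitz matrix and the classical Lyapunov inequality $ A^{T}P+PA < 0 $, $ P > 0 $, rather than the paper's full block conditions, which are solved the same way in principle with a dedicated SDP solver):

```python
import numpy as np

# Hedged sketch: check feasibility of the Lyapunov LMI A^T P + P A < 0, P > 0,
# by solving the Lyapunov equation A^T P + P A = -I as a linear system.
# The matrix A below is hypothetical, not taken from the paper's examples.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]

# Row-major vec identity: vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P)
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(K, -np.eye(n).flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                     # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)   # LMI feasible
print(P)                                # analytically [[1/4, 1/20], [1/20, 11/60]]
```

Since a feasible $ P $ exists, the toy LMI is feasible; the paper's larger LMIs are handled analogously, with the decision variables $ P_i, Q_i, R_j, T_k, \dots $ stacked into one semidefinite program.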

    The rest of the paper is arranged as follows. Section 2 presents the considered network and some definitions, propositions and lemmas. Section 3 presents the finite-time exponential stability of NNs with time-varying delay via the new LKF method. Two numerical examples with theoretical results and the conclusions are provided in Sections 4 and 5, respectively.

    This paper uses the following notation: $ \mathbb{R} $ stands for the set of real numbers; $ \mathbb{R}^n $ is the $ n $-dimensional Euclidean space; $ \mathbb{R}^{m \times n} $ is the set of all $ m \times n $ real matrices; $ A^T $ and $ A^{-1} $ denote the transpose and the inverse of a matrix $ A $, respectively; $ A $ is symmetric if $ A = A^T $; if $ A $ and $ B $ are symmetric matrices, $ A > B $ means that $ A-B $ is a positive definite matrix; $ I $ is the identity matrix of appropriate dimension. The symmetric term in a matrix is denoted by $ * $; $ sym\{A\} = A+A^{T} $; a block diagonal matrix is denoted by diag{...}.

    Let us consider the following neural network with time-varying delays:

    $ \dot{x}(t) = -Ax(t)+Bf(Wx(t))+Cg(Wx(t-h(t))), \qquad x(t) = \phi(t), \ \ t \in [-h_{2}, 0],
    $
    (2.1)

    where $ x(t) = [x_{1}(t), x_{2}(t), ..., x_{n}(t)]^{T} $ denotes the state vector of the $ n $ neurons; $ A = diag \{a_{1}, a_{2}, ..., a_{n} \} > 0 $ is a diagonal matrix; $ B $ and $ C $ are known real constant matrices with appropriate dimensions; $ f(Wx(\cdot)) = [f_{1}(W_{1}x(\cdot)), f_{2}(W_{2}x(\cdot)), ..., f_{n}(W_{n}x(\cdot))] $ and $ g(Wx(\cdot)) = [g_{1}(W_{1}x(\cdot)), g_{2}(W_{2}x(\cdot)), ..., g_{n}(W_{n}x(\cdot))] $ denote the neuron activation functions; $ W = [W_{1}^{T}, W_{2}^{T}, ..., W_{n}^{T}] $ is the delayed connection weight matrix; $ \phi (t) \in C[[-h_2, 0], \mathbb{R}^n] $ is the initial function. The time-varying delay function $ h(t) $ satisfies the following conditions:

    $ 0 \leq h_{1} \leq h(t) \leq h_{2}, \quad h_{1} < h_{2},
    $
    (2.2)

    where $ h_{1}, h_{2} $ are the known real constant scalars.
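To make the setup concrete, network (2.1) can be integrated numerically. The sketch below uses hypothetical two-neuron data ($ A $, $ B $, $ C $, $ W $, the delay $ h(t) $ and the tanh activations are illustrative choices, not the paper's examples) with a fixed-step Euler scheme and a history buffer for the delayed state:

```python
import numpy as np

# Euler simulation of x'(t) = -A x(t) + B f(W x(t)) + C g(W x(t - h(t)))
# with f = g = tanh and a bounded, non-differentiable delay h(t) in [h1, h2].
# All numerical data below are hypothetical.
A = np.diag([2.0, 1.5])
B = np.array([[0.2, -0.1], [0.1, 0.3]])
C = np.array([[0.1, 0.2], [-0.2, 0.1]])
W = np.eye(2)
h1, h2, dt, T = 0.1, 0.5, 0.001, 10.0

n_hist = int(h2 / dt)                       # history buffer covering [-h2, 0]
steps = int(T / dt)
x = np.zeros((n_hist + steps + 1, 2))
x[: n_hist + 1] = np.array([0.5, -0.3])     # constant initial function phi

for k in range(n_hist, n_hist + steps):
    t = (k - n_hist) * dt
    h = h1 + (h2 - h1) * abs(np.sin(20.0 * t))   # kinks: not differentiable
    xd = x[k - int(h / dt)]                      # delayed state x(t - h(t))
    x[k + 1] = x[k] + dt * (-A @ x[k] + B @ np.tanh(W @ x[k]) + C @ np.tanh(W @ xd))

print(np.linalg.norm(x[-1]))                # the state has decayed toward 0
```

For these (strongly diagonally dominant) data the state norm decays despite the kinked delay, which is the qualitative behavior the stability criteria below certify.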

    The neuron activation functions satisfy the following condition:

    Assumption 1. The neuron activation function $ f(\cdot) $ is continuous and bounded, and satisfies:

    $ k_{i}^{-} \leq \frac{f_{i}(\theta_{1})-f_{i}(\theta_{2})}{\theta_{1}-\theta_{2}} \leq k_{i}^{+}, \quad \forall \theta_{1}, \theta_{2} \in \mathbb{R}, \ \ \theta_{1} \neq \theta_{2}, \ \ i = 1, 2, ..., n,
    $
    (2.3)

    when $ \theta_2 = 0, $ Eq (2.3) can be rewritten as the following condition:

    $ k_{i}^{-} \leq \frac{f_{i}(\theta_{1})}{\theta_{1}} \leq k_{i}^{+},
    $
    (2.4)

    where $ f(0) = 0 $ and $ k_{i}^{-}, k_{i}^{+} $ are given constants.
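As a concrete illustration (our choice, not the paper's), $ f_{i} = \tanh $ satisfies condition (2.3) with $ k_{i}^{-} = 0 $ and $ k_{i}^{+} = 1 $, which can be spot-checked numerically together with the product form (2.5):

```python
import numpy as np

# Check the sector condition (2.3) and the product form (2.5) for f = tanh,
# which satisfies k^- = 0 and k^+ = 1 (an illustrative activation choice).
rng = np.random.default_rng(0)
k_minus, k_plus = 0.0, 1.0

theta1 = rng.uniform(-5.0, 5.0, 1000)
theta2 = rng.uniform(-5.0, 5.0, 1000)
df = np.tanh(theta1) - np.tanh(theta2)
dtheta = theta1 - theta2

q = df[dtheta != 0] / dtheta[dtheta != 0]     # difference quotients
assert np.all(q >= k_minus - 1e-9) and np.all(q <= k_plus + 1e-9)

# consequence (2.5): [df - k^- dtheta][k^+ dtheta - df] >= 0
prod = (df - k_minus * dtheta) * (k_plus * dtheta - df)
assert np.all(prod >= -1e-12)
print("tanh lies in the sector [0, 1]")
```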

    From (2.3) and (2.4), for $ i = 1, 2, ..., n, $ it follows that

    $ [f_{i}(\theta_{1})-f_{i}(\theta_{2})-k_{i}^{-}(\theta_{1}-\theta_{2})][k_{i}^{+}(\theta_{1}-\theta_{2})-f_{i}(\theta_{1})+f_{i}(\theta_{2})] \geq 0,
    $
    (2.5)
    $ [f_{i}(\theta_{1})-k_{i}^{-}\theta_{1}][k_{i}^{+}\theta_{1}-f_{i}(\theta_{1})] \geq 0.
    $
    (2.6)

    Based on Assumption 1, there exists an equilibrium point $ x^{*} = [x^{*}_{1}, x^{*}_{2}, ..., x^{*}_{n}]^{T} $ of neural network (2.1).

    To prove the main results, the following Definition, Proposition, Corollary and Lemmas are useful.

    Definition 1. [34] Given a positive definite matrix $ M $ and positive constants $ k_1, k_2, T_{f} $ with $ k_1 < k_2, $ the time-delay system described by (2.1) with delay condition (2.2) is said to be finite-time stable with respect to $ (k_1, k_2, T_{f}, h_1, h_2, M), $ if the state variables satisfy the following relationship:

    $ \sup\limits_{-h_{2} \leq s \leq 0}\{z^{T}(s)Mz(s), \ \dot{z}^{T}(s)M\dot{z}(s)\} \leq k_{1} \Longrightarrow z^{T}(t)Mz(t) < k_{2}, \quad \forall t \in [0, T_{f}].
    $
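Definition 1 can be illustrated on a simple trajectory. The scalar delayed system below is a hypothetical example (not from the paper); with $ M = 1 $, the check is that $ x^{2}(t) < k_{2} $ on $ [0, T_{f}] $ whenever the initial data satisfy the $ k_{1} $ bound:

```python
import numpy as np

# Illustration of Definition 1 with M = 1 on the hypothetical scalar system
# x'(t) = -2 x(t) + 0.5 x(t - h(t)), h(t) in [0.1, 0.5], constant history.
h2, dt, Tf = 0.5, 0.001, 5.0
k1, k2 = 1.0, 2.0

n_hist = int(h2 / dt)
steps = int(Tf / dt)
x = np.zeros(n_hist + steps + 1)
x[: n_hist + 1] = 0.9          # history: x^2 = 0.81 <= k1 (and x' = 0 there)

for k in range(n_hist, n_hist + steps):
    t = (k - n_hist) * dt
    h = 0.1 + 0.4 * abs(np.sin(3.0 * t))      # non-differentiable delay
    x[k + 1] = x[k] + dt * (-2.0 * x[k] + 0.5 * x[k - int(h / dt)])

assert np.max(x[: n_hist + 1] ** 2) <= k1     # initial-condition bound
assert np.all(x[n_hist:] ** 2 < k2)           # finite-time bound on [0, Tf]
print(np.max(x[n_hist:] ** 2))
```

Note that finite-time stability in this sense only asserts a bound over the fixed horizon $ [0, T_{f}] $; it is a different notion from asymptotic stability, although this particular example happens to decay as well.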

    Proposition 2. [34] For any positive definite matrix $ Q $ and any differentiable function $ z:[bd_{L}, bd_{U}]\rightarrow \mathbb{R}^{n}, $ the following inequality holds:

    $ -6bd_{UL}\int_{bd_L}^{bd_U}\dot{z}^{T}(s)Q\dot{z}(s)ds \leq \bar{\zeta}^{T}\begin{bmatrix} -22Q & -10Q & 32Q \\ * & -16Q & 26Q \\ * & * & -58Q \end{bmatrix}\bar{\zeta},
    $

    where $ \bar{\zeta}^T = \big[z(bd_U) \ \ z(bd_L) \ \ \frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds \big] $ and $ bd_{UL} = bd_{U}-bd_{L}. $

    Lemma 3. [40] (Schur complement) Given constant symmetric matrices $ X, Y, Z $ satisfying $ X = X^{T} $ and $ Y = Y^{T} > 0, $ then $ X+Z^{T}Y^{-1}Z < 0 $ if and only if

    $ \begin{bmatrix} X & Z^{T} \\ Z & -Y \end{bmatrix} < 0, \quad \text{or} \quad \begin{bmatrix} -Y & Z \\ Z^{T} & X \end{bmatrix} < 0.
    $
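A quick numerical sanity check of Lemma 3, with small hypothetical matrices of our own choosing:

```python
import numpy as np

# Verify the Schur complement equivalence on hypothetical 3x3 data:
# with Y > 0, X + Z^T Y^{-1} Z < 0 holds iff [[X, Z^T], [Z, -Y]] < 0.
def is_neg_def(m):
    return bool(np.all(np.linalg.eigvalsh(m) < 0))

rng = np.random.default_rng(1)
X = -4.0 * np.eye(3)                      # X = X^T
Z = 0.5 * rng.standard_normal((3, 3))
Y = 2.0 * np.eye(3)                       # Y = Y^T > 0

lhs = is_neg_def(X + Z.T @ np.linalg.inv(Y) @ Z)
rhs = is_neg_def(np.block([[X, Z.T], [Z, -Y]]))
assert lhs == rhs
print(lhs)
```

This equivalence is what turns the nonlinear terms $ K_{l}T_{k}^{-1}K_{l}^{T} $ appearing later in the proof into LMI-checkable blocks.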

    Corollary 4. [39] For a given symmetric matrix $ Q > 0, $ any vector $ \nu _{0} $ and matrices $ J_1, J_2, J_3, J_4 $ with proper dimensions and any continuously differentiable function $ z:[bd_{L}, bd_{U}]\rightarrow \mathbb{R}^{n} $, the following inequality holds:

    $ \begin{array}{l} -\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}\dot{z}^{T}(s)Q\dot{z}(s)dsd\delta \leq \nu_{0}^{T}\left(2J_{1}Q^{-1}J_{1}^{T}+4J_{2}Q^{-1}J_{2}^{T}\right)\nu_{0}+2\nu_{0}^{T}\left(2J_{1}\gamma_{1}+4J_{2}\gamma_{2}\right), \\ -\int_{bd_L}^{bd_U}\int_{bd_L}^{\delta}\dot{z}^{T}(s)Q\dot{z}(s)dsd\delta \leq \nu_{0}^{T}\left(2J_{3}Q^{-1}J_{3}^{T}+4J_{4}Q^{-1}J_{4}^{T}\right)\nu_{0}+2\nu_{0}^{T}\left(2J_{3}\gamma_{3}+4J_{4}\gamma_{4}\right), \end{array}
    $

    where $ bd_{UL} = bd_{U}-bd_{L}, $

    $ \gamma_1 = z(bd_U)-\frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds, \ \ \gamma_2 = z(bd_U)+\frac{2}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds -\frac{6}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta, \\ \gamma_3 = \frac{1}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds- z(bd_L), \ \ \gamma_4 = z(bd_L)-\frac{4}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds+\frac{6}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta. $

    Lemma 5. [39] For any matrix $ Q > 0 $ and differentiable function $ z:[bd_{L}, bd_U]\rightarrow \mathbb{R}^{n} $ such that the following integrals are well defined:

    $ bd_{UL}\int_{bd_L}^{bd_U}\dot{z}^{T}(s)Q\dot{z}(s)ds \geq \kappa_{1}^{T}Q\kappa_{1}+3\kappa_{2}^{T}Q\kappa_{2}+5\kappa_{3}^{T}Q\kappa_{3},
    $

    where $ \kappa_1 = z(bd_U)-z(bd_L), \ \ \kappa_2 = z(bd_U)+z(bd_L)-\frac{2}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds $,

    $ \kappa_3 = z(bd_U)-z(bd_L)+\frac{6}{bd_{UL}}\int_{bd_L}^{bd_U}z(s)ds -\frac{12}{(bd_{UL})^2}\int_{bd_L}^{bd_U}\int_{\delta}^{bd_U}z(s)ds d\delta $ and $ bd_{UL} = bd_{U}-bd_{L}. $
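A numerical spot-check of Lemma 5 (scalar $ Q = 1 $, with our own test function $ z(s) = s^{3} $ on $ [0, 1] $; for this cubic the second-order bound is attained with equality, both sides being $ 9/5 $):

```python
import numpy as np

def trapz(y, dx):
    # simple trapezoidal rule on a uniform grid
    return (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) * dx

bdL, bdU, n = 0.0, 1.0, 100001
bdUL = bdU - bdL
s = np.linspace(bdL, bdU, n)
dx = s[1] - s[0]
z, dz = s ** 3, 3.0 * s ** 2

lhs = bdUL * trapz(dz ** 2, dx)
w = trapz(z, dx) / bdUL                        # (1/bd_UL) * int z(s) ds
cum = np.concatenate(([0.0], np.cumsum(0.5 * (z[:-1] + z[1:]) * dx)))
v = trapz(cum[-1] - cum, dx) / bdUL ** 2       # (1/bd_UL^2) * double integral

kappa1 = z[-1] - z[0]
kappa2 = z[-1] + z[0] - 2.0 * w
kappa3 = z[-1] - z[0] + 6.0 * w - 12.0 * v
rhs = kappa1 ** 2 + 3.0 * kappa2 ** 2 + 5.0 * kappa3 ** 2
assert lhs >= rhs - 1e-6
assert abs(lhs - 9.0 / 5.0) < 1e-6
print(lhs, rhs)
```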

    Lemma 6. [41] For any positive definite symmetric constant matrix $ Q $ and scalar $ \tau > 0 $ such that the following integrals are well defined, it holds that

    $ \int_{-\tau}^{0}\int_{t+\delta}^{t}z^{T}(s)Qz(s)dsd\delta \geq \frac{2}{\tau^{2}}\left(\int_{-\tau}^{0}\int_{t+\delta}^{t}z(s)dsd\delta\right)^{T}Q\left(\int_{-\tau}^{0}\int_{t+\delta}^{t}z(s)dsd\delta\right).
    $
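Similarly, Lemma 6 can be spot-checked numerically (scalar $ Q = 1 $, hypothetical $ z(s) = \cos s $, $ \tau = 2 $, evaluated at $ t = 0 $):

```python
import numpy as np

def trapz(y, dx):
    return (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) * dx

tau, n = 2.0, 20001
s = np.linspace(-tau, 0.0, n)                  # grid for [t - tau, t] at t = 0
dx = s[1] - s[0]

def double_int(f):
    # int_{-tau}^{0} int_{t+delta}^{t} f(s) ds ddelta, evaluated at t = 0
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[:-1] + f[1:]) * dx)))
    return trapz(cum[-1] - cum, dx)

z = np.cos(s)
lhs = double_int(z ** 2)
rhs = (2.0 / tau ** 2) * double_int(z) ** 2
assert lhs >= rhs
print(lhs, rhs)
```

This is the Jensen-type double-integral bound that is applied to $ V_{7} $ in the proof of Theorem 9.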

    Let $ h_1, h_2 $ and $ \alpha $ be constants,

    $ h_{21} = h_2-h_1, \ \ h_{t1} = h(t)-h_1, \ \ h_{2t} = h_2-h(t), $

    $ \mathscr{N}_1 = \frac{1-e^{-\alpha h_1}}{\alpha}, \ \ \mathscr{N}_2 = \frac{1-e^{-\alpha h_2}}{\alpha}, \ \ \mathscr{N}_3 = \frac{1-(1+\alpha h_{1})e^{-\alpha h_1}}{\alpha ^{2}}, \ \ \mathscr{N}_4 = \frac{(1+\alpha h_{1})e^{-\alpha h_1}-(1+\alpha h_{2})e^{-\alpha h_2}}{\alpha ^{2}}, $ $ \ \ \mathscr{N}_5 = \frac{1-(1+\alpha h_{2})e^{-\alpha h_2}}{\alpha ^{2}}, $

    $ \ \ \mathscr{N}_6 = \frac{-3+2\alpha h_{2}+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha ^{3}}, \ \ \mathscr{N}_7 = \frac{-3+2\alpha h_{1}+4e^{-\alpha h_1}-e^{-2\alpha h_1}}{4\alpha ^{3}}, \ \ \mathscr{N}_8 = \frac{-3-2(2+\alpha h_{1})e^{-\alpha h_1}+e^{-2\alpha h_1}}{4\alpha ^{3}}, $

    $ \ \ \mathscr{N}_9 = \frac{4(\alpha h_{21}-1)e^{-\alpha h_1}-(2\alpha h_{21}-1)e^{-2\alpha h_1}+4e^{-\alpha h_2}-e^{-2\alpha h_2}}{4\alpha ^{3}}, \ \ \mathscr{N}_{10} = \frac{4e^{-\alpha h_1}-e^{-2\alpha h_1}-4e^{-\alpha h_2}+(1-2\alpha h_{21})e^{-2\alpha h_2}}{4\alpha ^{3}}, $
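As a sanity check on these constants (assuming, as in the LKF below, exponential kernels of the form $ e^{\alpha(s-t)} $ over the delay window), the first and third ones are elementary integrals:

```latex
\mathscr{N}_{1} = \int_{0}^{h_{1}} e^{-\alpha u}\, du = \frac{1-e^{-\alpha h_{1}}}{\alpha},
\qquad
\mathscr{N}_{3} = \int_{0}^{h_{1}} u\, e^{-\alpha u}\, du
  = \frac{1-(1+\alpha h_{1})e^{-\alpha h_{1}}}{\alpha^{2}},
```

and the remaining $ \mathscr{N}_{i} $ arise from the same kernels over the intervals $ [h_{1}, h_{2}] $ and $ [0, h_{2}] $.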

    $ I = M^{\frac{1}{2}}M^{-\frac{1}{2}} = M^{-\frac{1}{2}}M^{\frac{1}{2}}, \ \ \bar{P}_{i} = M^{-\frac{1}{2}}P_{i}M^{-\frac{1}{2}}, \ \ i = 1, 2, 3, ..., 6, \ \ \bar{Q}_{j} = M^{-\frac{1}{2}}Q_{j}M^{-\frac{1}{2}}, \ \ j = 1, 2, $

    $ \bar{R}_{k} = M^{-\frac{1}{2}}R_{k}M^{-\frac{1}{2}}, \ \ k = 1, 2, 3, \ \ \bar{S} = M^{-\frac{1}{2}}SM^{-\frac{1}{2}}, \ \ \bar{T}_{l} = M^{-\frac{1}{2}}T_{l}M^{-\frac{1}{2}}, \ \ l = 1, 2, 3, 4, $

    $ \ \ \mathscr{M} = \lambda_{min} \{ \bar{P_{i}}\}, \ \ i = 1, 2, 3, ..., 6, $

    $ \mathscr{N} = \lambda_{max}\{ \bar{P_{1}}\}+2\lambda_{max}\{ \bar{P_{2}}\}+\lambda_{max}\{ \bar{P_{3}}\}+2\lambda_{max}\{ \bar{P_{4}}\}+2\lambda_{max}\{ \bar{P_{5}}\}+\lambda_{max}\{ \bar{P_{6}}\} $

    $ \ \ +\mathscr{N}_{1}\lambda_{max}\{ \bar{Q_{1}}\}+\mathscr{N}_{2}\lambda_{max}\{ \bar{Q_{2}}\}+h_{1}\mathscr{N}_{3}\lambda_{max}\{ \bar{R_{1}}\}+h_{21}\mathscr{N}_{4}\lambda_{max}\{ \bar{R_{2}}\}+h_{2}\mathscr{N}_{5}\lambda_{max}\{ \bar{R_{3}}\} $

    $ \ \ +\mathscr{N}_{6}\lambda_{max}\{ \bar{S}\}+2\lambda_{max}\{ L_{1}\}+2\lambda_{max}\{ L_{2}\}+2\lambda_{max}\{ G_{1}\}+2\lambda_{max}\{ G_{2}\} $

    $ \ \ +\mathscr{N}_{7}\lambda_{max}\{ \bar{T_{1}}\}+\mathscr{N}_{8}\lambda_{max}\{ \bar{T_{2}}\}+\mathscr{N}_{9}\lambda_{max}\{ \bar{T_{3}}\}+\mathscr{N}_{10}\lambda_{max}\{ \bar{T_{4}}\}, $

    $ L_{1} = \sum_{i = 1}^{n}\lambda_{1i}, \ \ L_{2} = \sum_{i = 1}^{n}\lambda_{2i}, \ \ G_{1} = \sum_{i = 1}^{n}\gamma_{1i}, \ \ G_{2} = \sum_{i = 1}^{n}\gamma_{2i}. $

    The notations for some matrices are defined as follows:

    $ f(t) = f(Wx(t)) $ and $ g_{h}(t) = g(Wx(t-h(t))), $

    $ \mathscr{W}_{1}(t) = \frac{1}{h_1}\int_{t-h_1}^{t}x(s)ds, \ \ \mathscr{W}_{2}(t) = \frac{1}{h_{t1}}\int_{t-h(t)}^{t-h_1}x(s)ds, \ \ \mathscr{W}_{3}(t) = \frac{1}{h_{2t}}\int_{t-h_2}^{t-h(t)}x(s)ds, $

    $ \mathscr{W}_{4}(t) = \frac{1}{h_2}\int_{t-h_2}^{t}x(s)ds, \ \ \mathscr{W}_{5}(t) = \frac{1}{h_{2}}\int_{-h_2}^{0}\int_{t+s}^{t}x(\delta)ds d \delta, \ \ \mathscr{W}_{6}(t) = \frac{1}{h_{1}^{2}}\int_{t-h_1}^{t}\int_{\tau}^{t}x(s)ds d \tau, $

    $ \mathscr{W}_{7}(t) = \frac{1}{h_{t1}^{2}}\int_{t-h(t)}^{t-h_1}\int_{\tau}^{t-h_1}x(s)ds d \tau, \ \ \mathscr{W}_{8}(t) = \frac{1}{h_{2t}^{2}}\int_{t-h_2}^{t-h(t)}\int_{\tau}^{t-h(t)}x(s)ds d \tau, $

    $ \varpi_1(t) = [x^{T}(t) \ \ x^{T}(t-h_1) \ \ x^{T}(t-h(t)) \ \ x^{T}(t-h_2) \ \ f^{T}(t) \ \ g_{h}^{T}(t)]^{T}, $

    $ \varpi_2(t) = [\mathscr{W}_{1}^{T}(t) \ \ \mathscr{W}_{2}^{T}(t) \ \ \mathscr{W}_{3}^{T}(t) \ \ \mathscr{W}_{4}^{T}(t) \ \ \mathscr{W}_{5}^{T}(t) \ \ \dot{x}^{T}(t) \ \ \mathscr{W}_{6}^{T}(t) \ \ \mathscr{W}_{7}^{T}(t) \ \ \mathscr{W}_{8}^{T}(t)]^{T}, $

    $ \varpi = [\varpi_1^{T}(t) \ \ \varpi_2^{T}(t)]^{T}, $

    $ D_{1} = diag \{ k_{11}^{+}, k_{21}^{+}, ..., k_{n1}^{+} \}, D_{2} = diag \{ k_{12}^{+}, k_{22}^{+}, ..., k_{n2}^{+} \} $ and $ D = \max \{D_{1}, D_{2} \}, $

    $ E_{1} = diag \{ k_{11}^{-}, k_{21}^{-}, ..., k_{n1}^{-} \}, E_{2} = diag \{ k_{12}^{-}, k_{22}^{-}, ..., k_{n2}^{-} \} $ and $ E = \max \{E_{1}, E_{2} \}, $

    $ \zeta_{1}(t) = [x^{T}(t) \ \ x^{T}(t-h_1) \ \ \mathscr{W}^{T}_{1}(t)], \ \ \zeta_{2}(t) = [x^{T}(t-h_1) \ \ x^{T}(t-h(t)) \ \ \mathscr{W}^{T}_{2}(t)], $

    $ \zeta_{3}(t) = [x^{T}(t-h(t)) \ \ x^{T}(t-h_2) \ \ \mathscr{W}^{T}_{3}(t)], \ \ \zeta_{4}(t) = [x^{T}(t) \ \ x^{T}(t-h_{2}) \ \ \mathscr{W}^{T}_{4}(t)], $

    $ \mathscr{G}_{1} = x(t)-\mathscr{W}_{1}(t), \ \ \mathscr{G}_{2} = x(t)+2\mathscr{W}_{1}(t)-6\mathscr{W}_{6}(t), $

    $ \mathscr{G}_{3} = \mathscr{W}_{1}(t)-x(t-h_{1}), \ \ \mathscr{G}_{4} = x(t-h_{1})-4\mathscr{W}_{1}(t)+6\mathscr{W}_{6}(t), $

    $ \mathscr{G}_{5} = x(t-h_{1})-\mathscr{W}_{2}(t), \ \ \mathscr{G}_{6} = x(t-h_{1})+2\mathscr{W}_{2}(t)-6\mathscr{W}_{7}(t), $

    $ \mathscr{G}_{7} = x(t-h(t))-\mathscr{W}_{3}(t), \ \ \mathscr{G}_{8} = x(t-h(t))+2\mathscr{W}_{3}(t)-6\mathscr{W}_{8}(t), $

    $ \mathscr{G}_{9} = \mathscr{W}_{2}(t)-x(t-h(t)), \ \ \mathscr{G}_{10} = x(t-h(t))-4\mathscr{W}_{2}(t)+6\mathscr{W}_{7}(t), $

    $ \mathscr{G}_{11} = \mathscr{W}_{3}(t)-x(t-h_{2}), \ \ \mathscr{G}_{12} = x(t-h_{2})-4\mathscr{W}_{3}(t)+6\mathscr{W}_{8}(t). $

    Let us consider the following LKF for the stability criterion for network (2.1):

    $ V(t, x_{t}) = \sum\limits_{i = 1}^{10}V_{i}(t, x_{t}),
    $
    (3.1)

    where

    $ \begin{array}{l}
    V_{1}(t, x_{t}) = x^{T}(t)P_{1}x(t)+2x^{T}(t)P_{2}\int_{t-h_{2}}^{t}x(s)ds+\left(\int_{t-h_{2}}^{t}x(s)ds\right)^{T}P_{3}\int_{t-h_{2}}^{t}x(s)ds \\
    \qquad +2x^{T}(t)P_{4}\int_{-h_{2}}^{0}\int_{t+s}^{t}x(\delta)d\delta ds+2\left(\int_{t-h_{2}}^{t}x(s)ds\right)^{T}P_{5}\int_{-h_{2}}^{0}\int_{t+s}^{t}x(\delta)d\delta ds \\
    \qquad +\left(\int_{-h_{2}}^{0}\int_{t+s}^{t}x(\delta)d\delta ds\right)^{T}P_{6}\int_{-h_{2}}^{0}\int_{t+s}^{t}x(\delta)d\delta ds, \\
    V_{2}(t, x_{t}) = \int_{t-h_{1}}^{t}e^{\alpha(s-t)}x^{T}(s)Q_{1}x(s)ds, \quad V_{3}(t, x_{t}) = \int_{t-h_{2}}^{t}e^{\alpha(s-t)}x^{T}(s)Q_{2}x(s)ds, \\
    V_{4}(t, x_{t}) = h_{1}\int_{-h_{1}}^{0}\int_{t+s}^{t}e^{\alpha(\delta-t)}\dot{x}^{T}(\delta)R_{1}\dot{x}(\delta)d\delta ds, \\
    V_{5}(t, x_{t}) = h_{21}\int_{-h_{2}}^{-h_{1}}\int_{t+s}^{t}e^{\alpha(\delta-t)}\dot{x}^{T}(\delta)R_{2}\dot{x}(\delta)d\delta ds, \\
    V_{6}(t, x_{t}) = h_{2}\int_{-h_{2}}^{0}\int_{t+s}^{t}e^{\alpha(\delta-t)}\dot{x}^{T}(\delta)R_{3}\dot{x}(\delta)d\delta ds, \\
    V_{7}(t, x_{t}) = \int_{-h_{2}}^{0}\int_{\tau}^{0}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot{x}^{T}(\delta)S\dot{x}(\delta)d\delta ds d\tau, \\
    V_{8}(t, x_{t}) = 2e^{-\alpha t}\sum\limits_{i = 1}^{n}\int_{0}^{W_{i}x}\left[\lambda_{1i}(\sigma_{i}^{+}s-f_{i}(s))+\lambda_{2i}(f_{i}(s)-\sigma_{i}^{-}s)\right]ds, \\
    V_{9}(t, x_{t}) = 2e^{-\alpha t}\sum\limits_{i = 1}^{n}\int_{0}^{W_{i}x}\left[\gamma_{1i}(\eta_{i}^{+}s-g_{i}(s))+\gamma_{2i}(g_{i}(s)-\eta_{i}^{-}s)\right]ds, \\
    V_{10}(t, x_{t}) = \int_{-h_{1}}^{0}\int_{\tau}^{0}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot{x}^{T}(\delta)T_{1}\dot{x}(\delta)d\delta ds d\tau+\int_{-h_{1}}^{0}\int_{-h_{1}}^{\tau}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot{x}^{T}(\delta)T_{2}\dot{x}(\delta)d\delta ds d\tau \\
    \qquad +\int_{-h_{2}}^{-h_{1}}\int_{\tau}^{-h_{1}}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot{x}^{T}(\delta)T_{3}\dot{x}(\delta)d\delta ds d\tau+\int_{-h_{2}}^{-h_{1}}\int_{-h_{2}}^{\tau}\int_{t+s}^{t}e^{\alpha(\delta+s-t)}\dot{x}^{T}(\delta)T_{4}\dot{x}(\delta)d\delta ds d\tau.
    \end{array}
    $

    Next, we will show that the LKF (3.1) is positive definite as follows:

    Proposition 7. Consider an $ \alpha > 0 $. The LKF (3.1) is positive definite, if there exist matrices $ Q_i > 0, (i = 1, 2), $ $ R_j > 0, (j = 1, 2, 3) $, $ T_k > 0, (k = 1, 2, 3, 4) $, $ S > 0 $ and any matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $, such that the following LMI holds:

    $ H = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ * & H_{22} & P_{5} \\ * & * & H_{33} \end{bmatrix} > 0,
    $
    (3.2)

    where

    $ \begin{array}{l}
    H_{11} = P_{1}+h_{2}e^{-2\alpha h_{2}}R_{3}+0.5h_{2}e^{-2\alpha h_{2}}S, \quad H_{12} = P_{2}-e^{-2\alpha h_{2}}R_{3}, \quad H_{13} = P_{4}-h_{2}^{-1}e^{-2\alpha h_{2}}S, \\
    H_{22} = P_{3}+h_{2}^{-1}e^{-2\alpha h_{2}}(R_{3}+Q_{2}), \quad H_{33} = P_{6}+h_{2}^{-3}e^{-2\alpha h_{2}}(S+S^{T}).
    \end{array}
    $

    Proof. We let $ z_1(t) = h_{2}\mathscr{W}_{4}(t) $, $ z_2(t) = h_{2}\mathscr{W}_{5}(t) $, then

    $ \begin{array}{l}
    V_{1}(t, x_{t}) = x^{T}(t)P_{1}x(t)+2x^{T}(t)P_{2}z_{1}(t)+z_{1}^{T}(t)P_{3}z_{1}(t)+2x^{T}(t)P_{4}z_{2}(t)+2z_{1}^{T}(t)P_{5}z_{2}(t)+z_{2}^{T}(t)P_{6}z_{2}(t), \\
    V_{3}(t, x_{t}) \geq e^{-2\alpha h_{2}}\int_{t-h_{2}}^{t}x^{T}(s)Q_{2}x(s)ds \geq h_{2}^{-1}e^{-2\alpha h_{2}}z_{1}^{T}(t)Q_{2}z_{1}(t), \\
    V_{6}(t, x_{t}) \geq h_{2}e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}\int_{t+s}^{t}\dot{x}^{T}(\delta)R_{3}\dot{x}(\delta)d\delta ds \\
    \qquad \geq h_{2}e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}(-s)^{-1}\left(\int_{t+s}^{t}\dot{x}(\delta)d\delta\right)^{T}R_{3}\left(\int_{t+s}^{t}\dot{x}(\delta)d\delta\right)ds \\
    \qquad \geq e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}[x(t)-x(t+s)]^{T}R_{3}[x(t)-x(t+s)]ds \\
    \qquad \geq \begin{bmatrix} x(t) \\ z_{1}(t) \end{bmatrix}^{T}\begin{bmatrix} h_{2}e^{-2\alpha h_{2}}R_{3} & -e^{-2\alpha h_{2}}R_{3} \\ * & h_{2}^{-1}e^{-2\alpha h_{2}}R_{3} \end{bmatrix}\begin{bmatrix} x(t) \\ z_{1}(t) \end{bmatrix}, \\
    V_{7}(t, x_{t}) \geq e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}\int_{\tau}^{0}\int_{t+s}^{t}\dot{x}^{T}(\delta)S\dot{x}(\delta)d\delta ds d\tau \\
    \qquad \geq e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}\int_{\tau}^{0}(-s)^{-1}\left(\int_{t+s}^{t}\dot{x}(\delta)d\delta\right)^{T}S\left(\int_{t+s}^{t}\dot{x}(\delta)d\delta\right)ds d\tau \\
    \qquad \geq h_{2}^{-1}e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}\int_{\tau}^{0}[x(t)-x(t+s)]^{T}S[x(t)-x(t+s)]ds d\tau \\
    \qquad \geq \begin{bmatrix} x(t) \\ z_{2}(t) \end{bmatrix}^{T}\begin{bmatrix} 0.5h_{2}e^{-2\alpha h_{2}}S & -h_{2}^{-1}e^{-2\alpha h_{2}}S \\ * & h_{2}^{-3}e^{-2\alpha h_{2}}(S+S^{T}) \end{bmatrix}\begin{bmatrix} x(t) \\ z_{2}(t) \end{bmatrix}.
    \end{array}
    $

    Combining with $ V_{2}(t, x_t) $, $ V_{4}(t, x_t) $, $ V_{5}(t, x_t) $ and $ V_{8}(t, x_t)-V_{10}(t, x_t) $, it follows that if LMI (3.2) holds, then the LKF (3.1) is positive definite.

    Remark 8. It is worth noting that in most previous papers [1,2,3,4,5,6,7,15,20], the Lyapunov matrices $ P_1 $, $ P_3 $ and $ P_6 $ must be positive definite. In our work, we remove this restriction by constructing the complicated Lyapunov terms $ V_{1}(t, x_t) $, $ V_{3}(t, x_t) $, $ V_{6}(t, x_t) $ and $ V_{7}(t, x_t) $ as shown in the proof of Proposition 7; therefore, $ P_1 $, $ P_3 $ and $ P_6 $ are only required to be real symmetric matrices. Consequently, our results are less conservative and more applicable than the aforementioned works.

    Theorem 9. Given a matrix $ M > 0 $, the time-delay system described by (2.1) with delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M), $ if there exist symmetric positive definite matrices $ Q_i > 0 $, $ (i = 1, 2) $, $ R_j > 0 \ (j = 1, 2, 3) $, $ T_k > 0 \ (k = 1, 2, 3, 4) $, $ K_l > 0 \ (l = 1, 2, 3, ..., 12) $, diagonal matrices $ S > 0, \ \ H_{m} > 0, \ \ m = 1, 2, 3 $, and matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $ such that the following LMIs hold:

    $ H = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ * & H_{22} & P_{5} \\ * & * & H_{33} \end{bmatrix} > 0,
    $
    (3.3)
    $ \Omega_{1} = \begin{bmatrix} \Omega_{1,1} & \Omega_{1,2} \\ * & \Omega_{2,2} \end{bmatrix} < 0,
    $
    (3.4)
    $ \Omega_{1,1} = \begin{bmatrix}
    \Pi_{1,1} & \Pi_{1,2} & \Pi_{1,3} & \Pi_{1,4} & \Pi_{1,5} & \Pi_{1,6} & \Pi_{1,7} & 0 \\
    * & \Pi_{2,2} & \Pi_{2,3} & 0 & 0 & 0 & \Pi_{2,7} & \Pi_{2,8} \\
    * & * & \Pi_{3,3} & \Pi_{3,4} & \Pi_{3,5} & \Pi_{3,6} & 0 & \Pi_{3,8} \\
    * & * & * & \Pi_{4,4} & 0 & 0 & 0 & 0 \\
    * & * & * & * & \Pi_{5,5} & \Pi_{5,6} & 0 & 0 \\
    * & * & * & * & * & \Pi_{6,6} & 0 & 0 \\
    * & * & * & * & * & * & \Pi_{7,7} & 0 \\
    * & * & * & * & * & * & * & \Pi_{8,8}
    \end{bmatrix},
    $
    (3.5)
    $ \Omega_{1,2} = \begin{bmatrix}
    0 & \Xi_{1,2} & \Xi_{1,3} & \Xi_{1,4} & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & \Xi_{2,6} & 0 \\
    \Xi_{3,1} & 0 & 0 & \Xi_{3,4} & 0 & \Xi_{3,6} & \Xi_{3,7} \\
    \Xi_{4,1} & \Xi_{4,2} & \Xi_{4,3} & 0 & 0 & 0 & \Xi_{4,7} \\
    0 & 0 & 0 & \Xi_{5,4} & 0 & 0 & 0 \\
    0 & 0 & 0 & \Xi_{6,4} & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & \Xi_{8,6} & 0
    \end{bmatrix},
    $
    (3.6)
    $ \Omega_{2,2} = \begin{bmatrix}
    \Sigma_{1,1} & 0 & 0 & 0 & 0 & 0 & \Sigma_{1,7} \\
    * & \Sigma_{2,2} & \Sigma_{2,3} & \Sigma_{2,4} & 0 & 0 & 0 \\
    * & * & \Sigma_{3,3} & \Sigma_{3,4} & 0 & 0 & 0 \\
    * & * & * & \Sigma_{4,4} & 0 & 0 & 0 \\
    * & * & * & * & \Sigma_{5,5} & 0 & 0 \\
    * & * & * & * & * & \Sigma_{6,6} & 0 \\
    * & * & * & * & * & * & \Sigma_{7,7}
    \end{bmatrix},
    $
    (3.7)
    $ \Omega_{2} = diag\{\chi_{i}\} < 0,
    $
    (3.8)

    where $ i = 1, 2, 3, ..., 12, \ \ b_{1} = \frac{1}{6}, \ \ b_{2} = \frac{1}{h_{t1}}, \ \ b_{3} = \frac{1}{h_{2t}}, $

    $ \chi_{1} = -2 e^{2 \alpha h_{1}}T_{1} $, $ \ \ \chi_{2} = -4 e^{2 \alpha h_{1}}T_{1} $, $ \ \ \chi_{3} = -2 e^{2 \alpha h_{1}}T_{2} $, $ \ \ \chi_{4} = -4 e^{2 \alpha h_{1}}T_{2} $, $ \ \ \chi_{5} = \chi_{7} = -2 e^{2 \alpha h_{2}}T_{3} $, $ \ \ \chi_{6} = \chi_{8} = -4 e^{2 \alpha h_{2}}T_{3} $, $ \ \ \chi_{9} = \chi_{11} = -2 e^{2 \alpha h_{2}}T_{4} $, $ \ \ \chi_{10} = \chi_{12} = -4 e^{2 \alpha h_{2}}T_{4} $,

    and

    $ \mathscr{N}k_{1} \leq \mathscr{M}k_{2}e^{-\alpha T_{f}},
    $
    (3.9)
    $ \begin{array}{l}
    H_{11} = P_{1}+h_{2}e^{-2\alpha h_{2}}R_{3}+0.5h_{2}e^{-2\alpha h_{2}}S, \quad H_{12} = P_{2}-e^{-2\alpha h_{2}}R_{3}, \quad H_{13} = P_{4}-h_{2}^{-1}e^{-2\alpha h_{2}}S, \\
    H_{22} = P_{3}+h_{2}^{-1}e^{-2\alpha h_{2}}(R_{3}+Q_{2}), \quad H_{33} = P_{6}+h_{2}^{-3}e^{-2\alpha h_{2}}(S+S^{T}), \\
    \Pi_{1,1} = -P_{1}A-A^{T}P_{1}+2P_{2}+2h_{2}P_{4}+Q_{1}+Q_{2}-22e^{-\alpha h_{1}}R_{1}b_{1}-22e^{-\alpha h_{2}}R_{3}b_{1}-2e^{-2\alpha h_{2}}S-2QA \\
    \qquad -2W^{T}E_{1}^{T}H_{1}^{T}D_{1}W-2W^{T}E^{T}H_{3}DW+\alpha P_{1}+4K_{1}e^{-2\alpha h_{1}}+8K_{2}e^{-2\alpha h_{1}}, \\
    \Pi_{1,2} = -10e^{-\alpha h_{1}}R_{1}b_{1}, \quad \Pi_{1,3} = W^{T}E^{T}H_{3}^{T}DW+W^{T}D^{T}H_{3}^{T}EW, \quad \Pi_{1,4} = -P_{2}-10e^{-\alpha h_{2}}R_{3}b_{1}, \\
    \Pi_{1,5} = P_{1}B+QB+W^{T}D_{1}^{T}H_{1}^{T}+W^{T}E_{1}^{T}H_{1}+W^{T}D^{T}H_{3}^{T}+W^{T}E^{T}H_{3}, \\
    \Pi_{1,6} = P_{1}C+QC-W^{T}D^{T}H_{3}^{T}-W^{T}E^{T}H_{3}, \quad \Pi_{1,7} = 32e^{-\alpha h_{1}}R_{1}b_{1}, \\
    \Pi_{2,2} = -e^{-\alpha h_{1}}Q_{1}-16e^{-\alpha h_{1}}R_{1}b_{1}-22e^{-\alpha h_{2}}R_{2}b_{1}-4K_{3}e^{-2\alpha h_{1}}+8K_{4}e^{-2\alpha h_{1}}-9h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}+4K_{5}e^{-2\alpha h_{2}}+8K_{6}e^{-2\alpha h_{2}}, \\
    \Pi_{2,3} = -10e^{-\alpha h_{2}}R_{2}b_{1}+3h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}, \quad \Pi_{2,7} = 26e^{-\alpha h_{1}}R_{1}b_{1}, \\
    \Pi_{2,8} = 32e^{-\alpha h_{2}}R_{2}b_{1}-24h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}, \\
    \Pi_{3,3} = -16e^{-\alpha h_{2}}R_{2}b_{1}-22e^{-\alpha h_{2}}R_{2}b_{1}-2W^{T}E_{2}^{T}H_{2}D_{2}W-2W^{T}E^{T}H_{3}DW-9h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}+4K_{7}e^{-2\alpha h_{2}}+8K_{8}e^{-2\alpha h_{2}} \\
    \qquad -9h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}-4K_{9}e^{-2\alpha h_{2}}+8K_{10}e^{-2\alpha h_{2}}, \\
    \Pi_{3,4} = -10e^{-\alpha h_{2}}R_{2}b_{1}+3h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}, \quad \Pi_{3,5} = -W^{T}D^{T}H_{3}^{T}-W^{T}E^{T}H_{3}, \\
    \Pi_{3,6} = W^{T}D_{2}^{T}H_{2}^{T}+W^{T}E_{2}^{T}H_{2}+W^{T}D^{T}H_{3}^{T}+W^{T}E^{T}H_{3}, \quad \Pi_{3,8} = 26e^{-\alpha h_{2}}R_{2}b_{1}+36h_{2t}e^{-2\alpha h_{2}}T_{3}b_{3}, \\
    \Pi_{4,4} = -e^{-\alpha h_{2}}Q_{2}-16e^{-\alpha h_{2}}R_{2}b_{1}-16e^{-\alpha h_{2}}R_{3}b_{1}-9h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}-4K_{11}e^{-2\alpha h_{2}}+8K_{12}e^{-2\alpha h_{2}}, \\
    \Pi_{5,5} = -2H_{1}-2H_{3}, \quad \Pi_{6,6} = -2H_{2}-2H_{3}, \\
    \Pi_{7,7} = -58e^{-\alpha h_{1}}R_{1}b_{1}-4K_{1}e^{-2\alpha h_{1}}+16K_{2}e^{-2\alpha h_{1}}+4K_{3}e^{-2\alpha h_{1}}-32K_{4}e^{-2\alpha h_{1}}, \\
    \Pi_{8,8} = -58e^{-\alpha h_{2}}R_{2}b_{1}-192h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}-4K_{5}e^{-2\alpha h_{2}}+16K_{6}e^{-2\alpha h_{2}}+4K_{9}e^{-2\alpha h_{2}}-32K_{10}e^{-2\alpha h_{2}}, \\
    \Xi_{1,2} = h_{2}P_{3}-h_{2}P_{4}+h_{2}^{2}P_{5}^{T}+32e^{-\alpha h_{2}}R_{3}b_{1}+e^{-2\alpha h_{2}}S+\alpha h_{2}P_{2}, \\
    \Xi_{1,3} = h_{2}P_{5}+h_{2}^{2}P_{6}+\alpha h_{2}P_{4}, \quad \Xi_{1,4} = W^{T}D_{1}^{T}L_{1}W-W^{T}E_{1}^{T}L_{2}W-Q-A^{T}Q^{T}, \\
    \Xi_{2,6} = 60h_{2t}e^{-2\alpha h_{2}}T_{3}b_{3}, \quad \Xi_{3,1} = 32e^{-\alpha h_{2}}R_{2}b_{1}-24e^{-2\alpha h_{2}}T_{4}, \\
    \Xi_{3,4} = W^{T}D_{2}^{T}G_{1}W-W^{T}E_{2}^{T}G_{2}W, \quad \Xi_{3,6} = 60h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}, \quad \Xi_{3,7} = 60h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}, \\
    \Xi_{4,1} = 26e^{-\alpha h_{2}}R_{2}b_{1}+36h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}, \quad \Xi_{4,2} = -h_{2}P_{3}+26e^{-\alpha h_{2}}R_{3}b_{1}, \\
    \Xi_{4,3} = -h_{2}P_{5}, \quad \Xi_{4,7} = 60h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}, \quad \Xi_{5,4} = -L_{1}W+L_{2}W+B^{T}Q^{T}, \\
    \Xi_{6,4} = -G_{1}W+G_{2}W+C^{T}Q^{T}, \quad \Xi_{8,6} = 360h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}, \\
    \Sigma_{1,1} = -58e^{-\alpha h_{2}}R_{2}b_{1}-4K_{7}e^{-2\alpha h_{2}}+16K_{8}e^{-2\alpha h_{2}}-192h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}+4K_{11}e^{-2\alpha h_{2}}-32K_{12}e^{-2\alpha h_{2}}, \\
    \Sigma_{1,7} = 360h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}, \quad \Sigma_{2,2} = -h_{2}^{2}P_{5}-58e^{-\alpha h_{2}}R_{3}b_{1}-2e^{-2\alpha h_{2}}S+\alpha h_{2}^{2}P_{3}, \\
    \Sigma_{2,3} = -h_{2}^{2}P_{6}+\alpha h_{2}^{2}P_{5}, \quad \Sigma_{2,4} = h_{2}P_{2}, \quad \Sigma_{3,3} = \alpha h_{2}^{2}P_{6}, \quad \Sigma_{3,4} = h_{2}P_{4}, \\
    \Sigma_{4,4} = h_{1}^{2}R_{1}+h_{21}^{2}R_{2}+h_{2}^{2}R_{3}+3h_{2}^{2}Sb_{1}-2Q+3h_{1}^{2}(T_{1}+T_{2})b_{1}+3h_{21}^{2}(T_{3}+T_{4})b_{1}, \\
    \Sigma_{5,5} = -48K_{2}e^{-2\alpha h_{1}}+48K_{4}e^{-2\alpha h_{1}}, \quad \Sigma_{6,6} = -720h_{2t}e^{-2\alpha h_{2}}T_{3}b_{2}-48K_{6}e^{-2\alpha h_{2}}+48K_{10}e^{-2\alpha h_{2}}, \\
    \Sigma_{7,7} = -48K_{8}e^{-2\alpha h_{2}}-720h_{t1}e^{-2\alpha h_{2}}T_{4}b_{3}+48K_{12}e^{-2\alpha h_{2}}.
    \end{array}
    $

    Proof. Let us choose the LKF defined as in (3.1). By Proposition 7, it is easy to check that

    $ \mathscr{M}\|x(t)\|^{2} \leq V(t, x_{t}), \ \ \forall t \geq 0, \quad \text{and} \quad V(0, x_{0}) \leq \mathscr{N}\|\phi\|^{2}.
    $

    Taking the derivative of $ V_i(t, x_t), i = 1, 2, 3, ..., 10 $ along the solution of the network (2.1), we get

    $ \begin{array}{l}
    \dot{V}_{1}(t, x_{t}) = -2x^{T}(t)AP_{1}x(t)+2x^{T}(t)P_{1}Bf(t)+2x^{T}(t)P_{1}Cg_{h}(t)+2x^{T}(t)P_{2}[x(t)-x(t-h_{2})] \\
    \qquad +2h_{2}\mathscr{W}_{4}^{T}(t)P_{2}\dot{x}(t)+2h_{2}[x(t)-x(t-h_{2})]^{T}P_{3}\mathscr{W}_{4}(t)+2h_{2}x^{T}(t)P_{4}[x(t)-\mathscr{W}_{4}(t)]
    \end{array}
    $
    (3.10)
    $ \begin{array}{l}
    \qquad +2h_{2}\mathscr{W}_{5}^{T}(t)P_{4}\dot{x}(t)+2h_{2}^{2}\mathscr{W}_{4}^{T}(t)P_{5}[x(t)-\mathscr{W}_{4}(t)]+2h_{2}[x(t)-x(t-h_{2})]^{T}P_{5}\mathscr{W}_{5}(t) \\
    \qquad +2h_{2}^{2}[x(t)-\mathscr{W}_{4}(t)]^{T}P_{6}\mathscr{W}_{5}(t), \\
    \dot{V}_{2}(t, x_{t}) = x^{T}(t)Q_{1}x(t)-e^{-\alpha h_{1}}x^{T}(t-h_{1})Q_{1}x(t-h_{1})-\alpha V_{2}(t, x_{t}), \\
    \dot{V}_{3}(t, x_{t}) = x^{T}(t)Q_{2}x(t)-e^{-\alpha h_{2}}x^{T}(t-h_{2})Q_{2}x(t-h_{2})-\alpha V_{3}(t, x_{t}), \\
    \dot{V}_{4}(t, x_{t}) \leq h_{1}^{2}\dot{x}^{T}(t)R_{1}\dot{x}(t)-h_{1}e^{-\alpha h_{1}}\int_{t-h_{1}}^{t}\dot{x}^{T}(s)R_{1}\dot{x}(s)ds-\alpha V_{4}(t, x_{t}), \\
    \dot{V}_{5}(t, x_{t}) \leq h_{21}^{2}\dot{x}^{T}(t)R_{2}\dot{x}(t)-h_{21}e^{-\alpha h_{2}}\int_{t-h_{2}}^{t-h_{1}}\dot{x}^{T}(s)R_{2}\dot{x}(s)ds-\alpha V_{5}(t, x_{t}), \\
    \dot{V}_{6}(t, x_{t}) \leq h_{2}^{2}\dot{x}^{T}(t)R_{3}\dot{x}(t)-h_{2}e^{-\alpha h_{2}}\int_{t-h_{2}}^{t}\dot{x}^{T}(s)R_{3}\dot{x}(s)ds-\alpha V_{6}(t, x_{t}), \\
    \dot{V}_{7}(t, x_{t}) \leq \frac{h_{2}^{2}}{2}\dot{x}^{T}(t)S\dot{x}(t)-e^{-2\alpha h_{2}}\int_{-h_{2}}^{0}\int_{t+\tau}^{t}\dot{x}^{T}(s)S\dot{x}(s)dsd\tau-\alpha V_{7}(t, x_{t}),
    \end{array}
    $
    (3.11)
    $ \begin{array}{l}
    \dot{V}_{8}(t, x_{t}) \leq 2\left[L_{1}(D_{1}Wx(t)-f(Wx(t)))+L_{2}(f(Wx(t))-E_{1}Wx(t))\right]^{T}W\dot{x}(t)-\alpha V_{8}(t, x_{t}), \\
    \dot{V}_{9}(t, x_{t}) \leq 2\left[G_{1}(D_{2}Wx(t-h(t))-g(Wx(t-h(t))))+G_{2}(g(Wx(t-h(t)))-E_{2}Wx(t-h(t)))\right]^{T}W\dot{x}(t)-\alpha V_{9}(t, x_{t}), \\
    \dot{V}_{10}(t, x_{t}) \leq \frac{h_{1}^{2}}{2}\dot{x}^{T}(t)[T_{1}+T_{2}]\dot{x}(t)+\frac{h_{21}^{2}}{2}\dot{x}^{T}(t)[T_{3}+T_{4}]\dot{x}(t)-e^{-2\alpha h_{1}}\int_{t-h_{1}}^{t}\int_{\tau}^{t}\dot{x}^{T}(s)T_{1}\dot{x}(s)dsd\tau \\
    \qquad -e^{-2\alpha h_{1}}\int_{t-h_{1}}^{t}\int_{t-h_{1}}^{\tau}\dot{x}^{T}(s)T_{2}\dot{x}(s)dsd\tau-e^{-2\alpha h_{2}}\int_{t-h_{2}}^{t-h_{1}}\int_{\tau}^{t-h_{1}}\dot{x}^{T}(s)T_{3}\dot{x}(s)dsd\tau \\
    \qquad -e^{-2\alpha h_{2}}\int_{t-h_{2}}^{t-h_{1}}\int_{t-h_{2}}^{\tau}\dot{x}^{T}(s)T_{4}\dot{x}(s)dsd\tau-\alpha V_{10}(t, x_{t}).
    \end{array}
    $

    Define

    $ \chi_{j} = \begin{bmatrix} -22R & -10R & 32R \\ * & -16R & 26R \\ * & * & -58R \end{bmatrix}, \quad \text{with } R = R_{1} \text{ for } j = 1, \ R = R_{2} \text{ for } j = 2, 3, \ R = R_{3} \text{ for } j = 4.
    $

    Applying Proposition 2, we obtain

    $ -h_{1}e^{-\alpha h_{1}}\int_{t-h_{1}}^{t}\dot{x}^{T}(s)R_{1}\dot{x}(s)ds \leq \frac{e^{-\alpha h_{1}}}{6}\zeta_{1}^{T}(t)\chi_{1}\zeta_{1}(t),
    $
    (3.12)
    $ -h_{21}e^{-\alpha h_{2}}\int_{t-h_{2}}^{t-h_{1}}\dot{x}^{T}(s)R_{2}\dot{x}(s)ds \leq \frac{e^{-\alpha h_{2}}}{6}\zeta_{2}^{T}(t)\chi_{2}\zeta_{2}(t)+\frac{e^{-\alpha h_{2}}}{6}\zeta_{3}^{T}(t)\chi_{3}\zeta_{3}(t),
    $
    (3.13)
    $ -h_{2}e^{-\alpha h_{2}}\int_{t-h_{2}}^{t}\dot{x}^{T}(s)R_{3}\dot{x}(s)ds \leq \frac{e^{-\alpha h_{2}}}{6}\zeta_{4}^{T}(t)\chi_{4}\zeta_{4}(t).
    $
    (3.14)

    Applying Lemma 6 leads to

    $ eαh20h2tt+τ˙xT(s)S˙x(s)dsdτ2h22e2αh2[x(t)W4(t)]TS[x(t)W4(t)].
    $

    From Corollary 4, we have

    $ eαh1tth1tτ˙xT(s)T1˙x(s)dsdτ2eαh1ϖT(t)[K1T11KT1+2K2T11KT2+2K1G1+4K2G2]ϖ(t),eαh1τth1tth1˙xT(s)T2˙x(s)dsdτ2eαh1ϖT(t)[K3T12KT3+2K4T12KT4+2K3G3+4K4G4]ϖ(t),eαh2th1th2th1τ˙xT(s)T3˙x(s)dsdτh2teαh2th1th(t)˙xT(s)T3˙x(s)ds+2eαh2ϖT(t)[K5T13KT5+2K6T13KT6+2K5G5+4K6G6]ϖ(t)+2eαh2ϖT(t)[K7T13KT7+2K8T13KT8+2K7G7+4K8G8]ϖ(t),eαh2th1th2τth2˙xT(s)T4˙x(s)dsdτht1eαh2th(t)th2˙xT(s)T4˙x(s)ds+2eαh2ϖT(t)[K9T14KT9+2K10T14KT10+2K9G9+4K10G10]ϖ(t)+2eαh2ϖT(t)[K11T14KT11+2K12T14×KT12+2K11G11+4K12G12]ϖ(t).
    $
    (3.15)

    By Lemma 5, we obtain

    $ h2teαh2th1th(t)˙xT(s)T3˙x(s)dsht1eαh2th1th(t)˙xT(s)T4˙x(s)dsh2tht1eαh2([(x(th1)x(th(t))]TT3[x(th1)x(th(t))]+3[x(th1)+x(th(t))2W2(t)]TT3[x(th1)+x(th(t))2W2(t)]+5[x(th1)x(th(t))+6W2(t)12W7(t)]TT3×[x(th1)x(th(t))+6W2(t)12W7(t)])ht1h2teαh2([x(th(t))x(th2))]TT4[x(th(t))x(th2)]+3[x(th(t))+x(th2)2W3(t)]TT4[x(th(t))+x(th2)2W3(t)]+5[x(th(t))x(th2)+6W3(t)12W8(t)]TT4×[x(th(t))x(th2)+6W3(t)12W8(t)]).
    $
    (3.16)

    Using the assumptions (2.5) and (2.6) on the activation functions, for any diagonal matrices $ H_1, H_2, H_3 > 0 $ it follows that

    $ \begin{array}{l}2[f(t)-E_{1}Wx(t)]^{T}H_{1}[D_{1}Wx(t)-f(t)]\geq 0,\\ 2[g_{h}(t)-E_{2}Wx(t-h(t))]^{T}H_{2}[D_{2}Wx(t-h(t))-g_{h}(t)]\geq 0,\\ 2[f(t)-g_{h}(t)-E(Wx(t)-Wx(t-h(t)))]^{T}\times H_{3}[D(Wx(t)-Wx(t-h(t)))-f(t)+g_{h}(t)]\geq 0.\end{array}
    $
    (3.17)
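These inequalities are the usual sector conditions on the activation functions. As a quick numerical sanity check (illustrative only: a made-up diagonal $H$, and the special case $E = 0$, $D = I$, which is a valid sector for $\tanh$ since its slope lies in $[0, 1]$), one can verify that $2\,g(x)^{T}H[x - g(x)] \geq 0$:

```python
import math, random

random.seed(0)
h = [random.uniform(0.1, 2.0) for _ in range(3)]  # positive diagonal of H (made up)

ok = True
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    g = [math.tanh(v) for v in x]          # activation with slope in [0, 1]
    # sector condition with E = 0, D = I:  2 * sum_i h_i * g_i * (x_i - g_i) >= 0
    val = 2.0 * sum(hi * gi * (xi - gi) for hi, gi, xi in zip(h, g, x))
    ok = ok and val >= -1e-12
print(ok)   # expected: True
```

Each summand $h_i\,g_i(x_i)\,(x_i - g_i(x_i))$ is nonnegative because $\tanh(x)$ and $x - \tanh(x)$ always share the same sign.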

    Multiplying (2.1) by $ (2Qx(t)+2Q\dot{x}(t))^{T}, $ we have the following identity:

    $ -2x^{T}(t)Q\dot{x}(t)-2x^{T}(t)QAx(t)+2x^{T}(t)QBf(t)+2x^{T}(t)QCg_{h}(t)-2\dot{x}^{T}(t)Q\dot{x}(t)-2\dot{x}^{T}(t)QAx(t)+2\dot{x}^{T}(t)QBf(t)+2\dot{x}^{T}(t)QCg_{h}(t)=0.
    $
    (3.18)

    From (3.10)-(3.18), it follows that

    $ \dot{V}(t,x_{t})+\alpha V(t,x_{t})\leq \varpi^{T}(t)[\Omega_{1}+\Omega_{2}]\varpi(t),
    $

    where $ \Omega _{1} $ and $ \Omega _{2} $ are given in Eqs (3.4) and (3.8). Since $ \Omega _{1} < 0 $ and $ \Omega _{2} < 0 $, it follows that $ \dot{V}(t, x_t)+\alpha V(t, x_t)\leq 0 $; that is,

    $ \dot{V}(t,x_{t})\leq -\alpha V(t,x_{t}), \ \ t\geq 0.
    $
    (3.19)

    Integrating both sides of (3.19) from $ 0 $ to $ t $ with $ t \in [0, T_{f}] $, we obtain

    $ V(t,x_{t})\leq V(0,x_{0})e^{-\alpha t}, \ \ t\geq 0,
    $
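The step from (3.19) to this bound is the standard comparison (Grönwall-type) argument:

```latex
\frac{d}{dt}\left(e^{\alpha t}V(t,x_t)\right)
   = e^{\alpha t}\left(\dot{V}(t,x_t)+\alpha V(t,x_t)\right)\leq 0
\;\Longrightarrow\;
e^{\alpha t}V(t,x_t)\leq V(0,x_0)
\;\Longrightarrow\;
V(t,x_t)\leq V(0,x_0)e^{-\alpha t}.
```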

    with

    $ V1(0,x0)=xT(0)P1x(0)+2h2xT(0)P2WT4(0)+h22W4(0)P3W4(0)+2h2xT(0)P4W5(0)+2h22WT4(0)P5W5(0)+h22P5WT5(0)P6W5(0),V2(0,x0)=0h1eαsxT(s)Q1x(s)ds,V3(0,x0)=0h2eαsxT(s)Q2x(s)ds,V4(0,x0)=h10h10seαs˙xT(δ)R1˙x(δ)dδds,V5(0,x0)=h21h1h20seαs˙xT(δ)R2˙x(δ)dδds,V6(0,x0)=h20h20seαs˙xT(δ)R3˙x(δ)dδds,V7(0,x0)=0h20τ0seα(δ+s)˙xT(δ)S˙x(δ)dδdsdτ,V8(0,x0)=2ni=1Wix0[λ1i(σ+isfi(s))+λ2i(fi(s)σis)]ds,V9(0,x0)=2ni=1Wix0[γ1i(η+isgi(s))+γ2i(gi(s)ηis)]ds,V10(0,x0)=0h10τ0seα(δ+s)˙xT(δ)T1˙x(δ)dδdsdτ+0h1τh10seα(δ+s)˙xT(δ)T2˙x(δ)dδdsdτ+h1h2h1τ0seα(δ+s)˙xT(δ)T3˙x(δ)dδdsdτ+h1h2τh2s0eα(δ+s)˙xT(δ)T4˙x(δ)dδdsdτ.
    $

    Let $ I = M^{\frac{1}{2}}M^{-\frac{1}{2}} = M^{-\frac{1}{2}}M^{\frac{1}{2}}, \ \ \bar{P}_{i} = M^{-\frac{1}{2}}P_{i}M^{-\frac{1}{2}}, \ \ i = 1, 2, 3, ..., 6, $

    $ \bar{Q}_{j} = M^{-\frac{1}{2}}Q_{j}M^{-\frac{1}{2}}, \ \ j = 1, 2, \ \ \bar{R}_{k} = M^{-\frac{1}{2}}R_{k}M^{-\frac{1}{2}}, \ \ k = 1, 2, 3, \ \ \bar{T}_{l} = M^{-\frac{1}{2}}T_{l}M^{-\frac{1}{2}}, l = 1, 2, 3, 4. $ Therefore,

    $ V(0,x0)=xT(0)M12ˉP1M12x(0)+2h2xT(0)M12ˉP2M12W4(0)+h22W4(0)M12ˉP3M12W4(0)+2h2xT(0)M12ˉP4M12W5(0)+2h22WT4(0)M12ˉP5M12W5(0)+h22P5WT5(0)M12ˉP6M12W5(0)+0h1eαsxT(s)M12ˉQ1M12x(s)ds+0h2eαsxT(s)M12ˉQ2M12x(s)ds+h10h10seαs˙xT(δ)M12ˉR1M12˙x(δ)dδds+h21h1h20seαs˙xT(δ)M12ˉR2M12˙x(δ)dδds+h20h20seαs˙xT(δ)M12ˉR3M12˙x(δ)dδds+0h20τ0seα(δ+s)˙xT(δ)M12ˉSM12˙x(δ)dδdsdτ+2[L1(D1WxT(0)f(WxT(0)))+L2(f(WxT(0))E1WxT(0)]+2[G1(D2WxT(0)g(WxT(0)))+G2(g(WxT(0))E2WxT(0)]+0h10τ0seα(δ+s)˙xT(δ)M12ˉT1M12˙x(δ)dδdsdτ+0h1τh10seα(δ+s)˙xT(δ)M12ˉT2M12˙x(δ)dδdsdτ+h1h2h1τ0seα(δ+s)˙xT(δ)M12ˉT3M12˙x(δ)dδdsdτ+h1h2τh2s0eα(δ+s)˙xT(δ)M12ˉT4M12˙x(δ)dδdsdτ,k1[λmax{¯P1}+2λmax{¯P2}+λmax{¯P3}+2λmax{¯P4}+2λmax{¯P5}+λmax{¯P6}+N1λmax{¯Q1}+N2λmax{¯Q2}+h1N3λmax{¯R1}+h21N4λmax{¯R2}+h2N5λmax{¯R3}+N6λmax{ˉS}+2λmax{L1}+2λmax{L2}+2λmax{G1}+2λmax{G2}+N7λmax{¯T1}+N8λmax{¯T2}+N9λmax{¯T3}+N10λmax{¯T4}].
    $

    Since $ V(t, x_{t}) \geq V_{1}(t, x_{t}) $, we have

    $ V(t,xt)xT(t)ˉP1Mx(t)+2h2xT(t)ˉP2MW4(t)+h22WT4(t)ˉP3MW4(t)+2h2xT(t)ˉP4MW5(t)+2h22WT4(t)ˉP5MWT5(t)+h22WT5(t)ˉP6MW5(t),λmin(¯Pi)xT(t)Mx(t),  i=1,2,3,4,5,6.
    $

    For any $ t \in [0, T_{f}] $, it follows that

    $ x^{T}(t)Mx(t)\leq \frac{k_{1}e^{\alpha T_{f}}}{\lambda_{\min}(\bar{P}_{i})}\Big[\lambda_{\max}\{\bar{P}_{1}\}+2\lambda_{\max}\{\bar{P}_{2}\}+\lambda_{\max}\{\bar{P}_{3}\}+2\lambda_{\max}\{\bar{P}_{4}\}+2\lambda_{\max}\{\bar{P}_{5}\}+\lambda_{\max}\{\bar{P}_{6}\}+\mathscr{N}_{1}\lambda_{\max}\{\bar{Q}_{1}\}+\mathscr{N}_{2}\lambda_{\max}\{\bar{Q}_{2}\}+h_{1}\mathscr{N}_{3}\lambda_{\max}\{\bar{R}_{1}\}+h_{21}\mathscr{N}_{4}\lambda_{\max}\{\bar{R}_{2}\}+h_{2}\mathscr{N}_{5}\lambda_{\max}\{\bar{R}_{3}\}+\mathscr{N}_{6}\lambda_{\max}\{\bar{S}\}+2\lambda_{\max}\{L_{1}\}+2\lambda_{\max}\{L_{2}\}+2\lambda_{\max}\{G_{1}\}+2\lambda_{\max}\{G_{2}\}+\mathscr{N}_{7}\lambda_{\max}\{\bar{T}_{1}\}+\mathscr{N}_{8}\lambda_{\max}\{\bar{T}_{2}\}+\mathscr{N}_{9}\lambda_{\max}\{\bar{T}_{3}\}+\mathscr{N}_{10}\lambda_{\max}\{\bar{T}_{4}\}\Big]<k_{2}.
    $

    This shows that condition (3.9) holds. Therefore, the delayed neural network described by (2.1) with the delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M). $

    Remark 10. Condition (3.9) is not in standard LMI form. To show that it is equivalent to a set of LMIs, we apply the Schur complement lemma (Lemma 1) and let $ \mathscr{B}_{i}, \ i = 1, 2, 3, ..., 21 $ be positive scalars with

    $ \begin{array}{l}\mathscr{B}_{1}=\lambda_{\min}\{\bar{P}_{i}\}, \ \ i=1,2,3,...,6,\\ \mathscr{B}_{2}=\lambda_{\max}\{\bar{P}_{1}\}, \ \ \mathscr{B}_{3}=\lambda_{\max}\{\bar{P}_{2}\}, \ \ \mathscr{B}_{4}=\lambda_{\max}\{\bar{P}_{3}\}, \ \ \mathscr{B}_{5}=\lambda_{\max}\{\bar{P}_{4}\},\\ \mathscr{B}_{6}=\lambda_{\max}\{\bar{P}_{5}\}, \ \ \mathscr{B}_{7}=\lambda_{\max}\{\bar{P}_{6}\}, \ \ \mathscr{B}_{8}=\lambda_{\max}\{\bar{Q}_{1}\}, \ \ \mathscr{B}_{9}=\lambda_{\max}\{\bar{Q}_{2}\},\\ \mathscr{B}_{10}=\lambda_{\max}\{\bar{R}_{1}\}, \ \ \mathscr{B}_{11}=\lambda_{\max}\{\bar{R}_{2}\}, \ \ \mathscr{B}_{12}=\lambda_{\max}\{\bar{R}_{3}\}, \ \ \mathscr{B}_{13}=\lambda_{\max}\{\bar{S}\},\\ \mathscr{B}_{14}=\lambda_{\max}\{L_{1}\}, \ \ \mathscr{B}_{15}=\lambda_{\max}\{L_{2}\}, \ \ \mathscr{B}_{16}=\lambda_{\max}\{G_{1}\}, \ \ \mathscr{B}_{17}=\lambda_{\max}\{G_{2}\},\\ \mathscr{B}_{18}=\lambda_{\max}\{\bar{T}_{1}\}, \ \ \mathscr{B}_{19}=\lambda_{\max}\{\bar{T}_{2}\}, \ \ \mathscr{B}_{20}=\lambda_{\max}\{\bar{T}_{3}\}, \ \ \mathscr{B}_{21}=\lambda_{\max}\{\bar{T}_{4}\}.\end{array}
    $

    Let us define the following condition

    $ k_{1}\Big[\mathscr{B}_{2}+2\mathscr{B}_{3}+\mathscr{B}_{4}+2\mathscr{B}_{5}+2\mathscr{B}_{6}+\mathscr{B}_{7}+\mathscr{N}_{1}\mathscr{B}_{8}+\mathscr{N}_{2}\mathscr{B}_{9}+h_{1}\mathscr{N}_{3}\mathscr{B}_{10}+h_{21}\mathscr{N}_{4}\mathscr{B}_{11}+h_{2}\mathscr{N}_{5}\mathscr{B}_{12}+\mathscr{N}_{6}\mathscr{B}_{13}+2\mathscr{B}_{14}+2\mathscr{B}_{15}+2\mathscr{B}_{16}+2\mathscr{B}_{17}+\mathscr{N}_{7}\mathscr{B}_{18}+\mathscr{N}_{8}\mathscr{B}_{19}+\mathscr{N}_{9}\mathscr{B}_{20}+\mathscr{N}_{10}\mathscr{B}_{21}\Big]<k_{2}\mathscr{B}_{1}e^{-\alpha T_{f}}.
    $

    It follows that condition (3.9) is equivalent to the following relations and LMIs:

    $ \mathscr{B}_{1}I<\bar{P}_{1}<\mathscr{B}_{2}I, \ \ 0<\bar{P}_{2}<\mathscr{B}_{3}I, \ \ 0<\bar{P}_{3}<\mathscr{B}_{4}I, \ \ 0<\bar{P}_{4}<\mathscr{B}_{5}I, \ \ 0<\bar{P}_{5}<\mathscr{B}_{6}I, \ \ 0<\bar{P}_{6}<\mathscr{B}_{7}I, \ \ 0<\bar{Q}_{1}<\mathscr{B}_{8}I, \ \ 0<\bar{Q}_{2}<\mathscr{B}_{9}I, \ \ 0<\bar{R}_{1}<\mathscr{B}_{10}I, \ \ 0<\bar{R}_{2}<\mathscr{B}_{11}I, \ \ 0<\bar{R}_{3}<\mathscr{B}_{12}I, \ \ 0<\bar{S}<\mathscr{B}_{13}I, \ \ 0<L_{1}<\mathscr{B}_{14}I, \ \ 0<L_{2}<\mathscr{B}_{15}I, \ \ 0<G_{1}<\mathscr{B}_{16}I, \ \ 0<G_{2}<\mathscr{B}_{17}I, \ \ 0<\bar{T}_{1}<\mathscr{B}_{18}I, \ \ 0<\bar{T}_{2}<\mathscr{B}_{19}I, \ \ 0<\bar{T}_{3}<\mathscr{B}_{20}I, \ \ 0<\bar{T}_{4}<\mathscr{B}_{21}I,
    $
    (3.20)
    $ \Lambda=\begin{bmatrix}\Lambda_{1,1}&\Lambda_{1,2}&\Lambda_{1,3}\\ *&\Lambda_{2,2}&0\\ *&*&\Lambda_{3,3}\end{bmatrix}<0,
    $
    (3.21)
    $ \Lambda_{1,1}=\begin{bmatrix}\psi_{1,1}&\psi_{1,2}&\psi_{1,3}&\psi_{1,4}&\psi_{1,5}&\psi_{1,6}&\psi_{1,7}\\ *&-\mathscr{B}_{2}&0&0&0&0&0\\ *&*&-\mathscr{B}_{3}&0&0&0&0\\ *&*&*&-\mathscr{B}_{4}&0&0&0\\ *&*&*&*&-\mathscr{B}_{5}&0&0\\ *&*&*&*&*&-\mathscr{B}_{6}&0\\ *&*&*&*&*&*&-\mathscr{B}_{7}\end{bmatrix},
    $
    (3.22)
    $ \Lambda_{1,2}=\begin{bmatrix}\psi_{1,8}&\psi_{1,9}&\psi_{1,10}&\psi_{1,11}&\psi_{1,12}&\psi_{1,13}&\psi_{1,14}\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\end{bmatrix},
    $
    (3.23)
    $ \Lambda_{1,3}=\begin{bmatrix}\psi_{1,15}&\psi_{1,16}&\psi_{1,17}&\psi_{1,18}&\psi_{1,19}&\psi_{1,20}&\psi_{1,21}\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\end{bmatrix},
    $
    (3.24)
    $ \Lambda_{2,2}=\mathrm{diag}\{-\mathscr{B}_{8},\,-\mathscr{B}_{9},\,-\mathscr{B}_{10},\,-\mathscr{B}_{11},\,-\mathscr{B}_{12},\,-\mathscr{B}_{13},\,-\mathscr{B}_{14}\},
    $
    (3.25)
    $ \Lambda_{3,3}=\mathrm{diag}\{-\mathscr{B}_{15},\,-\mathscr{B}_{16},\,-\mathscr{B}_{17},\,-\mathscr{B}_{18},\,-\mathscr{B}_{19},\,-\mathscr{B}_{20},\,-\mathscr{B}_{21}\},
    $
    (3.26)

    where $ I \in \mathbb{R}^{n \times n} $ is an identity matrix, $ \psi_{1, 1} = -\mathscr{B}_{1}k_{2}e^{-\alpha T_{f}}, \ \psi_{1, 2} = \mathscr{B}_{2}\sqrt{k_{1}}, \ \psi_{1, 3} = \mathscr{B}_{3}\sqrt{2k_{1}}, \ \psi_{1, 4} = \mathscr{B}_{4}\sqrt{k_{1}}, \ \psi_{1, 5} = \mathscr{B}_{5}\sqrt{2k_{1}}, \ \psi_{1, 6} = \mathscr{B}_{6}\sqrt{2k_{1}}, \ \psi_{1, 7} = \mathscr{B}_{7}\sqrt{k_{1}}, \ \psi_{1, 8} = \mathscr{B}_{8}\sqrt{k_{1}\mathscr{N}_{1}}, \ \psi_{1, 9} = \mathscr{B}_{9}\sqrt{k_{1}\mathscr{N}_{2}}, \ \psi_{1, 10} = \mathscr{B}_{10}\sqrt{k_{1}h_{1}\mathscr{N}_{3}}, \ \psi_{1, 11} = \mathscr{B}_{11}\sqrt{k_{1}h_{21}\mathscr{N}_{4}}, \ \psi_{1, 12} = \mathscr{B}_{12}\sqrt{k_{1}h_{2}\mathscr{N}_{5}}, \ \psi_{1, 13} = \mathscr{B}_{13}\sqrt{k_{1}\mathscr{N}_{6}}, \ \psi_{1, 14} = \mathscr{B}_{14}\sqrt{2k_{1}}, \ \psi_{1, 15} = \mathscr{B}_{15}\sqrt{2k_{1}}, \ \psi_{1, 16} = \mathscr{B}_{16}\sqrt{2k_{1}}, \ \psi_{1, 17} = \mathscr{B}_{17}\sqrt{2k_{1}}, \ \psi_{1, 18} = \mathscr{B}_{18}\sqrt{k_{1}\mathscr{N}_{7}}, \ \psi_{1, 19} = \mathscr{B}_{19}\sqrt{k_{1}\mathscr{N}_{8}}, \ \psi_{1, 20} = \mathscr{B}_{20}\sqrt{k_{1}\mathscr{N}_{9}}, \ \psi_{1, 21} = \mathscr{B}_{21}\sqrt{k_{1}\mathscr{N}_{10}}. $
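The Schur-complement step behind (3.21) can be sanity-checked numerically. The sketch below is illustrative only (scalar blocks, made-up random numbers): for $c < 0$, the block matrix $\begin{bmatrix}a&b\\ b&c\end{bmatrix}$ is negative definite exactly when the Schur complement $a - b^{2}/c$ is negative.

```python
import random

random.seed(1)

def neg_def_2x2(a, b, c):
    # [[a, b], [b, c]] is negative definite iff a < 0 and a*c - b*b > 0
    return a < 0 and a * c - b * b > 0

agree = True
for _ in range(10000):
    a = random.uniform(-3, 3)
    b = random.uniform(-3, 3)
    c = random.uniform(-3, -0.1)          # keep the (2,2) block negative definite
    lhs = neg_def_2x2(a, b, c)            # direct test on the block matrix
    rhs = (a - b * b / c) < 0             # Schur complement test: A - B C^{-1} B^T < 0
    agree = agree and (lhs == rhs)
print(agree)   # expected: True
```

The same equivalence, applied block-wise, turns the scalar condition above into the LMI (3.21).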

    Corollary 11. Given a positive matrix $ M > 0 $, the time-delay system described by (2.1) with the delay condition (2.2) is finite-time stable with respect to $ (k_1, k_2, T_f, h_1, h_2, M) $ if there exist symmetric positive definite matrices $ Q_i > 0 \ (i = 1, 2) $, $ R_j > 0 \ (j = 1, 2, 3) $, $ T_k > 0 \ (k = 1, 2, 3, 4) $, $ K_l > 0 \ (l = 1, 2, 3, ..., 10) $, diagonal matrices $ S > 0, \ H_{m} > 0 \ (m = 1, 2, 3) $, matrices $ P_1 = P_1^T $, $ P_3 = P_3^T $, $ P_6 = P_6^T $, $ P_2 $, $ P_4 $, $ P_5 $, and positive scalars $ \alpha, \ \mathscr{B}_{i}, \ i = 1, 2, 3, ..., 21, $ such that the LMIs and inequalities (3.3)-(3.8) and (3.20)-(3.26) hold.

    Remark 12. If in the delayed NN (2.1) we choose $ B = W_{0}, C = W_{1}, W = W_{2}, $ then the system turns into the delayed NN proposed in [23],

    $ \dot{x}(t)=-Ax(t)+W_{0}f(W_{2}x(t))+W_{1}g(W_{2}x(t-h(t))),
    $
    (3.27)

    where $ 0 \leq h(t) \leq h_M $ and $ \dot{h} (t) \leq h_D; $ thus (3.27) is a special case of the delayed NN in (2.1).

    Remark 13. Replacing $ W_{0} = B, W_{1} = C, W_{2} = W, d_{1}(t) = d(t) = h(t) $ and $ d_{2}(t) = 0 $, and setting the external constant input to zero, in Eq (1) of the delayed NN studied in [16], we have

    $ \dot{x}(t)=-Ax(t)+Bf(Wx(t))+Cg(Wx(t-h(t))),
    $
    (3.28)

    so (3.28) is the same NN as in (2.1); that is, (2.1) is a particular case of the delayed NN in [16].

    Remark 14. If we choose $ B = 0, C = 1 $ and $ g = f $, and set the constant input to zero in the delayed NN (2.1), then it can be rewritten as

    $ \dot{x}(t)=-Ax(t)+f(Wx(t-h(t))),
    $
    (3.29)

    so (3.29) is a special case of the NN (2.1); networks of this form have been studied in [2,3,4,5,6,10,12,13,20].

    Remark 15. If we set $ B = W_0, C = W_1 $ and $ W = 1 $, and set the constant input to zero in the delayed NN (2.1), then (2.1) turns into

    $ \dot{x}(t)=-Ax(t)+W_{0}f(x(t))+W_{1}g(x(t-h(t))),
    $
    (3.30)

    so (3.30) is a special case of the NN (2.1); networks of this form have been studied in [8,11,24,28]. Similarly, if we rearrange the matrices in the delayed NN (2.1) and set $ W = 1 $, we recover the delayed NNs proposed in [9,19,22].

    Remark 16. The time delay in this work is a continuous function taking values in a given interval, so that lower and upper bounds on the time-varying delay exist, but the delay function is not required to be differentiable. In contrast, many earlier studies require the time delay function to be differentiable, as reported in [2,3,4,5,6,8,9,10,11,12,13,15,16,17,19,20,22,23,24,28].
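For instance, the delay used in the examples below, $h(t) = 0.6 + 0.5\vert \sin t\vert$, is continuous and interval-bounded with $h_1 = 0.6 \leq h(t) \leq h_2 = 1.1$, but is not differentiable at $t = k\pi$. A small illustrative check:

```python
import math

def h(t):
    # continuous, interval-bounded, non-differentiable delay from the examples
    return 0.6 + 0.5 * abs(math.sin(t))

# the bounds h1 <= h(t) <= h2 hold on a fine grid over [0, 20]
samples = [h(k * 0.001) for k in range(20000)]
print(0.6 <= min(samples) and max(samples) <= 1.1)          # expected: True

# one-sided difference quotients at t = pi disagree -> no derivative there
eps = 1e-6
left = (h(math.pi) - h(math.pi - eps)) / eps                # ~ -0.5
right = (h(math.pi + eps) - h(math.pi)) / eps               # ~ +0.5
print(abs(left + 0.5) < 1e-3 and abs(right - 0.5) < 1e-3)   # expected: True
```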

    In this section, we provide numerical examples with their simulations to demonstrate the effectiveness of our results.

    Example 17. Consider the neural networks (2.1) with parameters as follows:

    $ A=\mathrm{diag}\{7.3458, 6.9987, 5.5949\},\quad B=\mathrm{diag}\{0, 0, 0\},\quad C=\mathrm{diag}\{1, 1, 1\},\quad W=\begin{bmatrix}13.6014&-2.9616&-0.6938\\ 7.4736&21.6810&3.2100\\ 0.7290&-2.6334&-20.1300\end{bmatrix}.
    $

    The activation function satisfies Eq (2.3) with

    $ E_{1}=E_{2}=E=\mathrm{diag}\{0, 0, 0\}, \ \ D_{1}=D_{2}=D=\mathrm{diag}\{0.3680, 0.1795, 0.2876\}.
    $

    By applying the Matlab LMI Toolbox to solve the LMIs in (3.4)-(3.8), we obtain the upper bound $ h_{max} $ for the NN (2.1) without requiring a differentiable delay (unknown $ \mu $). Table 1 compares this result with those proposed in [1,2,3,4,5,6,7,15,20]; the upper bounds obtained in this work are larger than the corresponding ones. Note that the symbol ‘-’ indicates an upper bound not provided in the cited literature or in this paper.

    Table 1.  Upper bounds of time delay $ h $ for various values of $ \mu $.
    $ h_{max} $ Method $ \mu=0.1 $ $ \mu=0.3 $ $ \mu=0.5 $ $ \mu=0.9 $ unknown $ \mu $
    0.1 [1] 0.8411 0.5496 0.4267 0.3227 -
    [2] 0.9282 0.5891 - 0.3399 -
    [3] 0.9985 0.6062 - 0.3905 -
    [4] 1.1243 0.6768 0.5168 0.4487 -
    [5] 1.1278 0.6860 0.5325 0.4602 -
    Thm 1 [6] 1.2080 0.6744 0.5149 0.4482 -
    Prop. 2 [6] 1.2198 0.6771 0.5218 0.4601 -
    Thm 2 [6] 1.3282 0.7547 0.6341 0.5245 -
    [15] 0.9291 0.5916 - 0.3413 0.3413
    [20] 1.1732 0.6848 - 0.4526 0.4526
    This paper - - - - 2.4989
    0.5 [2] 1.0497 0.6021 - 0.6021 -
    [7] 1.1313 0.6509 - - -
    [4] 1.1366 0.6896 0.6243 0.6186 -
    [5] 1.1423 0.7206 0.6382 0.6219 -
    Thm 1 [6] 0.2106 0.6727 0.5657 0.4360 -
    Prop. 2 [6] 1.2327 0.6807 0.5766 0.4864 -
    Thm 2 [6] 1.3417 0.7744 0.6635 0.6221 -
    [15] 1.0521 0.6053 - 0.6053 0.6053
    [20] 1.3046 0.7738 - 0.7704 0.7704
    This paper - - - - 2.4997


    For the numerical simulation of finite-time stability of the delayed neural network (2.1), we take the time-varying delay $ h(t) = 0.6+0.5\vert \sin t \vert $ and the initial condition $ \phi (t) = [-0.8, -0.3, 0.8]^{T}, $ so that $ x^{T}(0)Mx(0) = 1.37 $ with $ M = I; $ we therefore choose $ k_{1} = 1.4, $ and the activation function is $ g(x(t)) = \tanh(x(t)). $ The trajectories $ x_1(t), x_2(t) $ and $ x_3(t) $ are shown in Figure 1, while Figure 2 shows the trajectory of $ x^{T}(t)x(t) $ with $ k_{2} = 1.575. $

    Figure 1.  The trajectories of $ x_1(t), x_2(t) $ and $ x_3(t) $ of finite-time stability for delayed neural network of Example 17.
    Figure 2.  The trajectories of $ x^{T}(t)x(t) $ of finite-time stability for delayed neural network (2.1) with $ k_{2} = 1.575 $ of Example 17.
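As a companion to Figures 1 and 2, the simulation can be re-created with a simple forward-Euler scheme. This is only an illustrative sketch: the sign pattern of $W$ is assumed (taken from the standard three-neuron benchmark, since signs are not legible above), and the initial function is assumed constant on $[-h_2, 0]$:

```python
import math

# Example 17 data (signs of W assumed from the usual benchmark; B = 0, C = I)
A = [7.3458, 6.9987, 5.5949]                      # diagonal of A
W = [[13.6014, -2.9616, -0.6938],
     [7.4736, 21.6810, 3.2100],
     [0.7290, -2.6334, -20.1300]]
phi = [-0.8, -0.3, 0.8]                           # constant initial function
k1, k2 = 1.4, 1.575

dt, T = 1e-3, 5.0
hist = [phi[:]]                                   # hist[k] approximates x(k*dt)

for k in range(int(T / dt)):
    t = k * dt
    h = 0.6 + 0.5 * abs(math.sin(t))              # time-varying delay
    xd = hist[max(0, k - int(h / dt))]            # x(t - h(t)); phi for t < h(t)
    Wx = [sum(W[i][j] * xd[j] for j in range(3)) for i in range(3)]
    gh = [math.tanh(v) for v in Wx]               # g = tanh
    x = hist[-1]
    # Euler step for x' = -A x + g(W x(t - h(t)))
    hist.append([x[i] + dt * (-A[i] * x[i] + gh[i]) for i in range(3)])

quad = [sum(v * v for v in x) for x in hist]      # x(t)^T M x(t) with M = I
print(round(quad[0], 2))                          # expected: 1.37
print(max(quad) < k2)                             # expected: True
```

With these assumptions, $x^{T}(0)x(0) = 1.37 \leq k_1 = 1.4$ and the quadratic form stays below $k_2 = 1.575$ over the horizon, matching Figure 2.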

    Example 18. Consider the neural networks (2.1) with parameters as follows:

    $ A=\mathrm{diag}\{7.0214, 7.4367\},\quad B=\mathrm{diag}\{0, 0\},\quad C=\mathrm{diag}\{1, 1\},\quad W=\begin{bmatrix}-6.4993&-12.0275\\ -0.6867&5.6614\end{bmatrix}.
    $

    The activation function satisfies Eq (2.3) with

    $ E_{1}=E_{2}=E=\mathrm{diag}\{0, 0\}, \ \ D_{1}=D_{2}=D=\mathrm{diag}\{1, 1\}.
    $

    Table 2 compares the upper bounds $ h_{max} $ obtained in [2,3,5,6,20] with those of this work, computed with the Matlab LMI Toolbox for the NN (2.1) with differentiable and unknown $ \mu $. The upper bounds obtained in this paper are larger than the corresponding ones. As before, the symbol ‘-’ indicates an upper bound not given in the cited works or in this study.

    Table 2.  Upper bounds of time delay $ h $ for various values of $ \mu $.
    $ h_{max} $ Method $ \mu=0.3 $ $ \mu=0.5 $ $ \mu=0.9 $ unknown $ \mu $
    0.1 [2] 0.4249 0.3014 0.2857 -
    [3] 0.4764 0.3635 0.3255 -
    [5] 0.5849 0.4433 0.3820 -
    Thm 1 [6] 0.5756 0.4312 0.3707 -
    Prop. 2 [6] 0.5783 0.4385 0.3860 -
    Thm 2 [6] 0.6444 0.5329 0.4383 -
    [20] 0.5123 0.4978 0.4625 0.4625
    This paper - - - 0.8999
    0.5 [2] 0.5147 0.4134 0.4134 -
    [3] 0.5335 0.4229 0.4228 -
    [5] 0.5992 0.4796 0.4373 -
    Thm 1 [6] 0.5760 0.4418 0.3922 -
    Prop. 2 [6] 0.5799 0.4583 0.4085 -
    Thm 2 [6] 0.6511 0.5408 0.4535 -
    [20] 0.6356 0.6356 0.6356 0.6356
    This paper - - - 0.8999


    For the numerical simulation of finite-time stability of the delayed neural network (2.1), we take the time-varying delay $ h(t) = 0.6+0.5\vert \sin t \vert $ and the initial condition $ \phi (t) = [-0.4, 0.5]^{T}, $ so that $ x^{T}(0)Mx(0) = 0.41 $ with $ M = I; $ we therefore choose $ k_{1} = 0.5, $ and the activation function is $ g(x(t)) = \tanh(x(t)). $ The trajectories $ x_1(t) $ and $ x_2(t) $ are shown in Figure 3, while Figure 4 shows the trajectory of $ x^{T}(t)x(t) $ with $ k_{2} = 0.85. $

    Figure 3.  The trajectories of $ x_1(t) $ and $ x_2(t) $ of finite-time stability for delayed neural network of Example 18.
    Figure 4.  The trajectories of $ x^{T}(t)x(t) $ of finite-time stability for delayed neural network (2.1) with $ k_{2} = 0.85 $ of Example 18.

    Example 19. Consider the neural networks (2.1) with parameters as follows:

    $ A=\begin{bmatrix}1.7&1.7&0\\ 1.3&1&0.7\\ 0.7&1&0.6\end{bmatrix},\quad B=\begin{bmatrix}1.5&1.7&0.1\\ 1.3&1&0.5\\ 0.7&1&0.6\end{bmatrix},\quad C=\begin{bmatrix}0.5&0.7&0.1\\ 0.3&0.1&0.5\\ 0.7&0.5&0.6\end{bmatrix},\quad W=I,
    $

    and the activation functions $ f(x(t)) = g(x(t)) = \tanh(x(t)) $, with the time-varying delay $ h(t) = 0.6+0.5\vert \sin t \vert. $ With the initial condition $ \phi (t) = [0.4, 0.2, 0.4]^{T}, $ the solution of the neural network is shown in Figure 5; Figure 6 shows that the trajectory of $ x^T(t)Mx(t) = \| x(t)\|^2 $ diverges as $ t\rightarrow \infty $. We further investigate the maximum value of $ T_f $ for which the neural network (2.1) is finite-time stable with respect to $ (0.6, k_2, T_f, 0.6, 1.1, I) $. For fixed $ k_2 = 500 $, solving the LMIs of the theorem above and Corollary 11 gives the maximum value $ T_f = 8.395 $.

    Figure 5.  The trajectories of $ x_1(t) $, $ x_2(t) $ and $ x_3(t) $ of finite-time stability for delayed neural network of Example 19.
    Figure 6.  The trajectories of $ x^{T}(t)x(t) $ of finite-time stability for delayed neural network (2.1) with $ k_{2} = 500 $ and $ T_f = 8.395 $ of Example 19.
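The contrast that Example 19 illustrates, a trajectory that diverges asymptotically yet can still satisfy a finite-time bound on $[0, T_f]$, can be reproduced with the same Euler scheme. Again this is only a sketch, with the parsing of $A$, $B$, $C$ above assumed:

```python
import math

# Example 19 data (matrix entries as parsed above; W = I, f = g = tanh)
A = [[1.7, 1.7, 0.0], [1.3, 1.0, 0.7], [0.7, 1.0, 0.6]]
B = [[1.5, 1.7, 0.1], [1.3, 1.0, 0.5], [0.7, 1.0, 0.6]]
C = [[0.5, 0.7, 0.1], [0.3, 0.1, 0.5], [0.7, 0.5, 0.6]]
phi = [0.4, 0.2, 0.4]

def mv(M, v):                                     # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

dt, T = 1e-3, 12.0
hist = [phi[:]]
for k in range(int(T / dt)):
    t = k * dt
    h = 0.6 + 0.5 * abs(math.sin(t))
    xd = hist[max(0, k - int(h / dt))]            # delayed state x(t - h(t))
    x = hist[-1]
    f = [math.tanh(v) for v in x]
    gh = [math.tanh(v) for v in xd]
    # Euler step for x' = -A x + B f(x) + C g(x(t - h(t)))
    rhs = [-a + b + c for a, b, c in zip(mv(A, x), mv(B, f), mv(C, gh))]
    hist.append([x[i] + dt * rhs[i] for i in range(3)])

quad = [sum(v * v for v in x) for x in hist]      # x(t)^T x(t)
print(quad[-1] > quad[0])                         # expected: True (divergence)
```

Under these assumptions the quadratic form grows without bound, as in Figure 6, so the network is not asymptotically stable even though it can be finite-time stable up to $T_f$.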

    In this research, a finite-time stability criterion for neural networks with non-differentiable time-varying delays was proposed via a new argument based on the Lyapunov-Krasovskii functional (LKF) method. The new LKF was improved by including triple integral terms, combined with refined integral inequalities and a positive diagonal matrix, without resorting to free-weighting matrices. The improved finite-time sufficient conditions for the neural network with time-varying delay were formulated in terms of linear matrix inequalities (LMIs), and the results were shown to be less conservative than those reported in previous research.

    The first author was supported by Faculty of Science and Engineering, and Research and Academic Service Division, Kasetsart University, Chalermprakiat Sakon Nakhon province campus. The second author was financially supported by the Thailand Research Fund (TRF), the Office of the Higher Education Commission (OHEC) (grant number : MRG6280149) and Khon Kaen University.

    The authors declare that there is no conflict of interests regarding the publication of this paper.
