Research article

New results on finite-/fixed-time synchronization of delayed memristive neural networks with diffusion effects

  • Received: 07 February 2022 Revised: 24 June 2022 Accepted: 03 July 2022 Published: 19 July 2022
  • MSC : 34K39, 93D05

  • In this paper, we further investigate the finite-/fixed-time synchronization (FFTS) problem for a class of delayed memristive reaction-diffusion neural networks (MRDNNs). By utilizing the state-feedback control techniques, and constructing a general Lyapunov functional, with the help of inequality techniques and the finite-time stability theory, novel criteria are established to realize the FFTS of the considered delayed MRDNNs, which generalize and complement previously known results. Finally, a numerical example is provided to support the obtained theoretical results.

    Citation: Yinjie Qian, Lian Duan, Hui Wei. New results on finite-/fixed-time synchronization of delayed memristive neural networks with diffusion effects[J]. AIMS Mathematics, 2022, 7(9): 16962-16974. doi: 10.3934/math.2022931




    The classical convexity and concavity of functions are two fundamental notions in mathematics, with wide applications in many branches of mathematics and physics [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. The theory of convex functions is generally attributed to Jensen [31]. The well-known book [32] played an indispensable role in the development of the theory of convex functions.

    The significance of inequalities continues to grow because of their fertile applications: they are used to solve many complex problems in all areas of science and technology [33,34,35,36,37,38,39,40]. Integral inequalities have numerous applications in number theory, combinatorics, orthogonal polynomials, hypergeometric functions, quantum theory, linear programming, optimization theory, mechanics and the theory of relativity [41,42,43,44,45,46,47,48]. The subject has received considerable attention from researchers [49,50,51,52,53,54] and is hence regarded as a subject that integrates mathematics, statistics, economics, and physics [55,56,57,58,59,60].

    One of the most well-known and widely used inequalities for convex functions is the Hermite-Hadamard inequality, which can be stated as follows.

    Let $I\subseteq\mathbb{R}$ be an interval and $Y:I\rightarrow\mathbb{R}$ be a convex function. Then the double inequality

    $$Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\le\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\le\frac{Y(\rho_1)+Y(\rho_2)}{2} \tag{1.1}$$

    holds for all $\rho_1,\rho_2\in I$ with $\rho_1\ne\rho_2$. If $Y$ is concave on the interval $I$, then the reversed inequality (1.1) holds.
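The double inequality (1.1) is easy to check numerically; the sketch below uses the convex function $Y(x)=x^2$ on $[0,1]$ (an arbitrary illustrative choice, not taken from the paper):

```python
# Numerical sanity check of the Hermite-Hadamard inequality (1.1)
# for the convex function Y(x) = x**2 on [0, 1].

def integral(f, a, b, n=100_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def hh_triple(Y, r1, r2):
    """Return (midpoint value, integral mean, endpoint average) of (1.1)."""
    mean_value = integral(Y, r1, r2) / (r2 - r1)
    return Y((r1 + r2) / 2), mean_value, (Y(r1) + Y(r2)) / 2

left, mid, right = hh_triple(lambda x: x * x, 0.0, 1.0)
assert left <= mid <= right   # 0.25 <= 0.333... <= 0.5
print(left, mid, right)
```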

    The Hermite-Hadamard inequality (1.1) has wide applications in the study of functional analysis (geometry of Banach spaces) and in the field of non-linear analysis [61]. Interestingly, both sides of the above integral inequality (1.1) can characterize the convex functions.

    Closely related to the convex (concave) functions, we have the concept of exponentially convex (concave) functions. The exponentially convex (concave) functions can be considered as a noteworthy extension of the convex functions and have potential applications in information theory, big data analysis, machine learning, and statistics [62,63]. Bernstein [64] and Antczak [65] introduced these exponentially convex functions implicitly and discussed their role in mathematical programming. Dragomir and Gomm [66] and Rashid et al. [67] established novel results for these exponentially convex functions.

    Now we recall the concept of exponentially convex functions, which is mainly due to Awan et al. [68].

    Definition 1.1. ([68]) Let $\theta\in\mathbb{R}$. Then a real-valued function $Y:[0,\infty)\rightarrow\mathbb{R}$ is said to be $\theta$-exponentially convex if

    $$Y(\tau\rho_1+(1-\tau)\rho_2)\le\tau e^{\theta\rho_1}Y(\rho_1)+(1-\tau)e^{\theta\rho_2}Y(\rho_2) \tag{1.2}$$

    for all $\rho_1,\rho_2\in[0,\infty)$ and $\tau\in[0,1]$. Inequality (1.2) holds in the reverse direction if $Y$ is concave.

    For example, the mapping $Y:[0,\infty)\rightarrow\mathbb{R}$ defined by $Y(\upsilon)=\upsilon^2$ is a convex function, and thus this mapping is exponentially convex for all $\theta>0$. Exponentially convex functions are employed in statistical analysis, recurrent neural networks, and experimental design, and are highly useful due to their dominant features.
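Definition 1.1 can be verified for this example on a finite grid of test points (a sketch; the grid and the parameter values are arbitrary choices):

```python
import math
import random

def is_theta_exp_convex(Y, theta, points, taus):
    """Test inequality (1.2) on a finite grid of points in [0, inf)."""
    for r1 in points:
        for r2 in points:
            for t in taus:
                lhs = Y(t * r1 + (1 - t) * r2)
                rhs = (t * math.exp(theta * r1) * Y(r1)
                       + (1 - t) * math.exp(theta * r2) * Y(r2))
                if lhs > rhs + 1e-12:
                    return False
    return True

random.seed(0)
pts = [random.uniform(0.0, 5.0) for _ in range(20)]
taus = [k / 10 for k in range(11)]

# Y(v) = v**2 is convex and nonnegative on [0, inf), and e^(theta*v) >= 1
# for theta > 0, so (1.2) holds on the grid.
assert is_theta_exp_convex(lambda v: v * v, 1.0, pts, taus)
```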

    Recall the concept of exponentially quasi-convex function, introduced by Nie et al. [69].

    Definition 1.2. ([69]) Let $\theta\in\mathbb{R}$. Then a mapping $Y:[0,\infty)\rightarrow\mathbb{R}$ is said to be $\theta$-exponentially quasi-convex if

    $$Y(\tau\rho_1+(1-\tau)\rho_2)\le\max\bigl\{e^{\theta\rho_1}Y(\rho_1),\,e^{\theta\rho_2}Y(\rho_2)\bigr\}$$

    for all $\rho_1,\rho_2\in[0,\infty)$ and $\tau\in[0,1]$.

    Kirmaci [70], and Pearce and Pečarić [71], established new inequalities involving convex functions as follows.

    Theorem 1.3. ([70]) Let $I\subseteq\mathbb{R}$ be an interval, $\rho_1,\rho_2\in I^{\circ}$ with $\rho_1<\rho_2$, and $Y:I\rightarrow\mathbb{R}$ be a differentiable mapping on $I^{\circ}$ (where and in what follows $I^{\circ}$ denotes the interior of $I$) such that $Y'\in L([\rho_1,\rho_2])$ and $|Y'|$ is convex on $[\rho_1,\rho_2]$. Then

    $$\left|Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)-\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\right|\le\frac{(\rho_2-\rho_1)\bigl(|Y'(\rho_1)|+|Y'(\rho_2)|\bigr)}{8}. \tag{1.3}$$

    Theorem 1.4. ([71]) Let $\lambda\ge1$, $I\subseteq\mathbb{R}$ be an interval, $\rho_1,\rho_2\in I^{\circ}$ with $\rho_1<\rho_2$, and $Y:I\rightarrow\mathbb{R}$ be a differentiable mapping on $I^{\circ}$ such that $Y'\in L([\rho_1,\rho_2])$ and $|Y'|^{\lambda}$ is convex on $[\rho_1,\rho_2]$. Then

    $$\left|Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)-\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{4}\biggl[\frac{|Y'(\rho_1)|^{\lambda}+|Y'(\rho_2)|^{\lambda}}{2}\biggr]^{\frac{1}{\lambda}}. \tag{1.4}$$
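Both bounds can be checked numerically for a sample function (a sketch; $Y(x)=x^4$ on $[1,2]$ with $\lambda=2$ is an arbitrary choice for which $|Y'|$ and $|Y'|^{\lambda}$ are convex):

```python
def integral(f, a, b, n=100_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r1, r2 = 1.0, 2.0
Y = lambda x: x ** 4
dY = lambda x: 4 * x ** 3     # derivative, |dY| convex on [1, 2]

lhs = abs(Y((r1 + r2) / 2) - integral(Y, r1, r2) / (r2 - r1))
rhs_13 = (r2 - r1) * (abs(dY(r1)) + abs(dY(r2))) / 8                      # Theorem 1.3
lam = 2.0
rhs_14 = (r2 - r1) / 4 * ((abs(dY(r1)) ** lam
                           + abs(dY(r2)) ** lam) / 2) ** (1 / lam)        # Theorem 1.4

assert lhs <= rhs_13 and lhs <= rhs_14
print(lhs, rhs_13, rhs_14)
```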

    The principal objective of this work is to establish novel generalizations of weighted variants of (1.3) and (1.4) for functions whose derivatives, in absolute value at certain powers, are exponentially convex, with the aid of an auxiliary identity. An analogous improvement is also developed for exponentially quasi-convex functions. Utilizing the obtained consequences, new bounds are established for the weighted mean formula, the $r$th moment of a continuous random variable, and special bivariate means, and the resulting Hermite-Hadamard type inequalities recover various existing results as special cases.

    In what follows we use the notations

    $$L(\rho_1,\rho_2,\tau)=\frac{n+\tau}{n+1}\rho_1+\frac{1-\tau}{n+1}\rho_2$$

    and

    $$M(\rho_1,\rho_2,\tau)=\frac{1-\tau}{n+1}\rho_1+\frac{n+\tau}{n+1}\rho_2$$

    for $\tau\in[0,1]$ and all $n\in\mathbb{N}$.
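In code, the two weight functions and their basic properties — the endpoint values and the identity $L+M=\rho_1+\rho_2$ — look as follows (a sketch; the concrete interval and $n$ are arbitrary):

```python
def L(r1, r2, tau, n):
    """L(rho1, rho2, tau) = ((n+tau)*rho1 + (1-tau)*rho2) / (n+1)."""
    return (n + tau) / (n + 1) * r1 + (1 - tau) / (n + 1) * r2

def M(r1, r2, tau, n):
    """M(rho1, rho2, tau) = ((1-tau)*rho1 + (n+tau)*rho2) / (n+1)."""
    return (1 - tau) / (n + 1) * r1 + (n + tau) / (n + 1) * r2

r1, r2, n = 0.0, 2.0, 2
assert abs(L(r1, r2, 1, n) - r1) < 1e-12                       # L(.,.,1) = rho1
assert abs(M(r1, r2, 1, n) - r2) < 1e-12                       # M(.,.,1) = rho2
assert abs(L(r1, r2, 0, n) - (n * r1 + r2) / (n + 1)) < 1e-12  # L(.,.,0) = (n*rho1+rho2)/(n+1)
for k in range(11):
    t = k / 10
    # L and M always sum to rho1 + rho2
    assert abs(L(r1, r2, t, n) + M(r1, r2, t, n) - (r1 + r2)) < 1e-12
```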

    From now on, let $\rho_1,\rho_2\in\mathbb{R}$ with $\rho_1<\rho_2$ and $I=[\rho_1,\rho_2]$, unless otherwise specified. The following lemma is presented as an auxiliary result which will be helpful for deriving several new results.

    Lemma 2.1. Let $n\in\mathbb{N}$, $Y:I\rightarrow\mathbb{R}$ be a differentiable mapping on $I^{\circ}$ such that $Y'\in L_1([\rho_1,\rho_2])$, and $U:[\rho_1,\rho_2]\rightarrow[0,\infty)$ be a differentiable mapping. Then one has

    $$\frac{1}{2}\Bigl[U(\rho_1)\bigl[Y(\rho_1)+Y(\rho_2)\bigr]-\Bigl\{U\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-U\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)+U(\rho_2)\Bigr\}Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-\Bigl\{U\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-U\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)+U(\rho_2)\Bigr\}Y\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr]$$
    $$+\frac{\rho_2-\rho_1}{2(n+1)}\int_0^1\Bigl[Y\bigl(L(\rho_1,\rho_2,\tau)\bigr)+Y\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\Bigl[U'\bigl(L(\rho_1,\rho_2,\tau)\bigr)+U'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\,d\tau$$
    $$=\frac{\rho_2-\rho_1}{2(n+1)}\int_0^1\Bigl[U\bigl(L(\rho_1,\rho_2,\tau)\bigr)-U\bigl(M(\rho_1,\rho_2,\tau)\bigr)+U(\rho_2)\Bigr]\Bigl[Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)+Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\,d\tau. \tag{2.1}$$

    Proof. For brevity, write $L_\tau=L(\rho_1,\rho_2,\tau)$ and $M_\tau=M(\rho_1,\rho_2,\tau)$, and note that $L_1=\rho_1$, $M_1=\rho_2$, $L_0=\frac{n\rho_1+\rho_2}{n+1}$ and $M_0=\frac{\rho_1+n\rho_2}{n+1}$. It follows from integration by parts that

    $$I_1=\int_0^1\bigl[U(L_\tau)-U(M_\tau)+U(\rho_2)\bigr]Y'(L_\tau)\,d\tau$$
    $$=\frac{n+1}{\rho_2-\rho_1}\bigl\{U(L_\tau)-U(M_\tau)+U(\rho_2)\bigr\}Y(L_\tau)\Big|_0^1+\int_0^1 Y(L_\tau)\bigl[U'(L_\tau)+U'(M_\tau)\bigr]\,d\tau$$
    $$=\frac{n+1}{\rho_2-\rho_1}\Bigl[U(\rho_1)Y(\rho_1)-\Bigl[U\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-U\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)+U(\rho_2)\Bigr]Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr]+\int_0^1 Y(L_\tau)\bigl[U'(L_\tau)+U'(M_\tau)\bigr]\,d\tau.$$

    Similarly, we have

    $$I_2=\int_0^1\bigl[U(L_\tau)-U(M_\tau)+U(\rho_2)\bigr]Y'(M_\tau)\,d\tau$$
    $$=\frac{n+1}{\rho_2-\rho_1}\Bigl[U(\rho_1)Y(\rho_2)-\Bigl[U\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-U\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)+U(\rho_2)\Bigr]Y\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr]+\int_0^1 Y(M_\tau)\bigl[U'(L_\tau)+U'(M_\tau)\bigr]\,d\tau.$$

    Adding $I_1$ and $I_2$, and then multiplying by $\frac{\rho_2-\rho_1}{2(n+1)}$, we get the desired identity (2.1).

    Theorem 2.2. Let $n\in\mathbb{N}$, $\theta\in\mathbb{R}$, $Y:I\rightarrow\mathbb{R}$ be a differentiable mapping on $I^{\circ}$ such that $|Y'|$ is $\theta$-exponentially convex on $I$, and $V:I\rightarrow[0,\infty)$ be a continuous and positive mapping such that it is symmetric with respect to $\frac{n\rho_1+\rho_2}{n+1}$. Then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{n+1}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau. \tag{2.2}$$

    Proof. Let $U(\tau)=\int_{\rho_1}^{\tau}V(\varrho)\,d\varrho$ for $\tau\in[\rho_1,\rho_2]$. Then it follows from Lemma 2.1 applied to this $U$ that

    $$\frac{\rho_2-\rho_1}{2(n+1)}\int_0^1\Bigl[Y\bigl(L(\rho_1,\rho_2,\tau)\bigr)+Y\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\Bigl[V\bigl(L(\rho_1,\rho_2,\tau)\bigr)+V\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\,d\tau-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho$$
    $$=\frac{\rho_2-\rho_1}{2(n+1)}\int_0^1\Bigl\{\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho+\int_{M(\rho_1,\rho_2,\tau)}^{\rho_2}V(\varrho)\,d\varrho\Bigr\}\Bigl[Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)+Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\,d\tau. \tag{2.3}$$

    Since $V(\varrho)$ is symmetric with respect to $\varrho=\frac{n\rho_1+\rho_2}{n+1}$, we have

    $$\frac{\rho_2-\rho_1}{2(n+1)}\int_0^1\Bigl[Y\bigl(L(\rho_1,\rho_2,\tau)\bigr)+Y\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\Bigl[V\bigl(L(\rho_1,\rho_2,\tau)\bigr)+V\bigl(M(\rho_1,\rho_2,\tau)\bigr)\Bigr]\,d\tau$$
    $$=\frac{\rho_2-\rho_1}{n+1}\int_0^1 Y\bigl(L(\rho_1,\rho_2,\tau)\bigr)V\bigl(L(\rho_1,\rho_2,\tau)\bigr)\,d\tau+\frac{\rho_2-\rho_1}{n+1}\int_0^1 Y\bigl(M(\rho_1,\rho_2,\tau)\bigr)V\bigl(M(\rho_1,\rho_2,\tau)\bigr)\,d\tau$$
    $$=\int_{\rho_1}^{\frac{n\rho_1+\rho_2}{n+1}}Y(\varrho)V(\varrho)\,d\varrho+\int_{\frac{\rho_1+n\rho_2}{n+1}}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho=\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho \tag{2.4}$$

    and

    $$\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho=\int_{M(\rho_1,\rho_2,\tau)}^{\rho_2}V(\varrho)\,d\varrho,\qquad\tau\in[0,1]. \tag{2.5}$$

    From (2.3)–(2.5) we clearly see that

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\Bigl\{\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|\,d\varrho\,d\tau+\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|\,d\varrho\,d\tau\Bigr\}. \tag{2.6}$$

    Making use of the exponential convexity of $|Y'|$, we get

    $$\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|\,d\varrho\,d\tau+\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|\,d\varrho\,d\tau$$
    $$\le\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\Bigl[\frac{n+\tau}{n+1}\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\frac{1-\tau}{n+1}\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|+\frac{1-\tau}{n+1}\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\frac{n+\tau}{n+1}\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\,d\varrho\,d\tau$$
    $$=\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau. \tag{2.7}$$

    Therefore, inequality (2.2) follows from (2.6) and (2.7).

    Corollary 2.1. Let $\theta=0$. Then Theorem 2.2 leads to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{n+1}\bigl[|Y'(\rho_1)|+|Y'(\rho_2)|\bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.2. Let $n=1$. Then Theorem 2.2 reduces to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{2}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.3. Let $V(\varrho)=1$. Then Theorem 2.2 becomes

    $$\left|Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{2(n+1)^2}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr].$$
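Corollary 2.3 can be verified numerically for a sample function (a sketch; $Y(x)=x^2$ on $[0,1]$ with $\theta=1$ and $n=2$ is an arbitrary choice with $|Y'|$ exponentially convex):

```python
import math

def integral(f, a, b, n=100_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r1, r2, theta, n = 0.0, 1.0, 1.0, 2
Y = lambda x: x * x
dY = lambda x: 2 * x

center = (n * r1 + r2) / (n + 1)                      # (n*rho1 + rho2)/(n+1) = 1/3
lhs = abs(Y(center) - integral(Y, r1, r2) / (r2 - r1))
rhs = (r2 - r1) / (2 * (n + 1) ** 2) * (
    abs(math.exp(theta * r1) * dY(r1)) + abs(math.exp(theta * r2) * dY(r2)))

assert lhs <= rhs   # 2/9 <= e/9
print(lhs, rhs)
```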

    Remark 2.1. Theorem 2.2 leads to the conclusion that:

    (1) If $n=1$ and $\theta=0$, then we get Theorem 2.2 of [72].

    (2) If $n=V(\varrho)=1$ and $\theta=0$, then we obtain inequality (1.3) of [70].

    Theorem 2.3. Suppose that the hypotheses of Theorem 2.2 are satisfied and let $\lambda\ge1$. If $\theta\in\mathbb{R}$ and $|Y'|^{\lambda}$ is $\theta$-exponentially convex on $I$, then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le\frac{2(\rho_2-\rho_1)}{n+1}\biggl[\frac{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda}+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda}}{2}\biggr]^{\frac{1}{\lambda}}\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau \tag{2.8}$$

    for all $n\in\mathbb{N}$.

    Proof. Continuing from inequality (2.6) in the proof of Theorem 2.2 and using the well-known Hölder integral inequality, one has

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\biggl\{\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau\Bigr)^{1-\frac{1}{\lambda}}\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}$$
    $$\qquad+\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau\Bigr)^{1-\frac{1}{\lambda}}\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}\biggr\}$$
    $$=\frac{\rho_2-\rho_1}{n+1}\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau\Bigr)^{1-\frac{1}{\lambda}}\biggl\{\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}+\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}\biggr\}. \tag{2.9}$$

    It follows from the power-mean inequality

    $$\mu^{a}+\nu^{a}\le 2^{1-a}(\mu+\nu)^{a}$$

    for $\mu,\nu>0$ and $a\le1$ that

    $$\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}+\Bigl(\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\,d\varrho\,d\tau\Bigr)^{\frac{1}{\lambda}}$$
    $$\le 2^{1-\frac{1}{\lambda}}\biggl\{\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\Bigl(\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}+\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\Bigr)\,d\varrho\,d\tau\biggr\}^{\frac{1}{\lambda}}. \tag{2.10}$$

    Since $|Y'|^{\lambda}$ is $\theta$-exponentially convex on $I$, we have

    $$\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}+\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}$$
    $$\le\frac{n+\tau}{n+1}\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda}+\frac{1-\tau}{n+1}\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda}+\frac{1-\tau}{n+1}\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda}+\frac{n+\tau}{n+1}\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda}$$
    $$=\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda}+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda}. \tag{2.11}$$

    Combining (2.9)–(2.11) gives the required inequality (2.8).

    Corollary 2.4. Let $n=1$. Then Theorem 2.3 reduces to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le(\rho_2-\rho_1)\biggl[\frac{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda}+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda}}{2}\biggr]^{\frac{1}{\lambda}}\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.5. Let $\theta=0$. Then Theorem 2.3 leads to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|\le\frac{2(\rho_2-\rho_1)}{n+1}\biggl[\frac{|Y'(\rho_1)|^{\lambda}+|Y'(\rho_2)|^{\lambda}}{2}\biggr]^{\frac{1}{\lambda}}\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.6. Let $V(\varrho)=1$. Then Theorem 2.3 becomes

    $$\left|Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\right|\le\frac{\rho_2-\rho_1}{(n+1)^2}\biggl[\frac{|Y'(\rho_1)|^{\lambda}+|Y'(\rho_2)|^{\lambda}}{2}\biggr]^{\frac{1}{\lambda}}.$$

    Remark 2.2. From Theorem 2.3 we clearly see that:

    (1) If $n=1$ and $\theta=0$, then we get Theorem 2.4 in [72].

    (2) If $V(\varrho)=n=1$ and $\theta=0$, then we get inequality (1.4) in [71].

    In the following result, the exponentially convex functions of Theorem 2.3 are replaced by exponentially quasi-convex functions.

    Theorem 2.4. Suppose that the hypotheses of Theorem 2.2 are satisfied. If $|Y'|$ is $\theta$-exponentially quasi-convex on $I$, then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right| \tag{2.12}$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\biggl[\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|,\Bigl|e^{\theta\bigl(\frac{n\rho_1+\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}+\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|,\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau$$

    for all $n\in\mathbb{N}$.

    Proof. Note that $L(\rho_1,\rho_2,\tau)=\tau\rho_1+(1-\tau)\frac{n\rho_1+\rho_2}{n+1}$ and $M(\rho_1,\rho_2,\tau)=\tau\rho_2+(1-\tau)\frac{\rho_1+n\rho_2}{n+1}$. Using the exponential quasi-convexity of $|Y'|$ in (2.6) of the proof of Theorem 2.2, we get

    $$\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|\le\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|,\Bigl|e^{\theta\bigl(\frac{n\rho_1+\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr\} \tag{2.13}$$

    and

    $$\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|\le\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|,\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}. \tag{2.14}$$

    Combining (2.6), (2.13) and (2.14), we get the desired inequality (2.12).

    Next, we discuss some special cases of Theorem 2.4 as follows.

    Corollary 2.7. Let $n=1$. Then Theorem 2.4 reduces to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{2}\biggl[\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|,\Bigl|e^{\theta\bigl(\frac{\rho_1+\rho_2}{2}\bigr)}Y'\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\Bigr|\Bigr\}+\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|,\Bigl|e^{\theta\bigl(\frac{\rho_1+\rho_2}{2}\bigr)}Y'\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\Bigr|\Bigr\}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.8. Let $\theta=0$. Then Theorem 2.4 leads to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\biggl[\max\Bigl\{|Y'(\rho_1)|,\Bigl|Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}+\max\Bigl\{|Y'(\rho_2)|,\Bigl|Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.9. Let $V(\varrho)=1$. Then Theorem 2.4 becomes

    $$\left|Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)-\frac{1}{\rho_2-\rho_1}\int_{\rho_1}^{\rho_2}Y(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{2(n+1)^2}\biggl[\max\Bigl\{|Y'(\rho_1)|,\Bigl|Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}+\max\Bigl\{|Y'(\rho_2)|,\Bigl|Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}\biggr].$$

    Remark 2.3. If $|Y'|$ is increasing in Theorem 2.4, then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right| \tag{2.15}$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\Bigl[\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|+\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    If $|Y'|$ is decreasing in Theorem 2.4, then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right| \tag{2.16}$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\Bigl|e^{\theta\bigl(\frac{n\rho_1+\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Remark 2.4. From Theorem 2.4 we clearly see that:

    (1) Let $n=1$ and $\theta=0$. Then Theorem 2.4 and Remark 2.3 lead to Theorem 2.8 and Remark 2.9 of [72], respectively.

    (2) Let $n=V(\varrho)=1$ and $\theta=0$. Then we get Corollary 2.10 and Remark 2.11 of [72].

    Theorem 2.5. Suppose that all the hypotheses of Theorem 2.2 are satisfied, $\theta\in\mathbb{R}$ and $\lambda\ge1$. If $|Y'|^{\lambda}$ is $\theta$-exponentially quasi-convex on $I$, then

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right| \tag{2.17}$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\biggl[\Bigl(\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{n\rho_1+\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}+\Bigl(\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau$$

    for all $n\in\mathbb{N}$.

    Proof. It follows from the exponential quasi-convexity of $|Y'|^{\lambda}$ that

    $$\bigl|Y'\bigl(L(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\le\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{n\rho_1+\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\} \tag{2.18}$$

    and

    $$\bigl|Y'\bigl(M(\rho_1,\rho_2,\tau)\bigr)\bigr|^{\lambda}\le\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}. \tag{2.19}$$

    A combination of (2.6), (2.18) and (2.19) leads to the required inequality (2.17).

    Corollary 2.10. Let $n=1$. Then Theorem 2.5 reduces to

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{2}\biggl[\Bigl(\max\Bigl\{\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\rho_1+\rho_2}{2}\bigr)}Y'\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}+\Bigl(\max\Bigl\{\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\rho_1+\rho_2}{2}\bigr)}Y'\Bigl(\frac{\rho_1+\rho_2}{2}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Corollary 2.11. If $\theta=0$, then Theorem 2.5 leads to the conclusion that

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\rho_2-\rho_1}{n+1}\biggl[\max\Bigl\{|Y'(\rho_1)|,\Bigl|Y'\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}+\max\Bigl\{|Y'(\rho_2)|,\Bigl|Y'\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)\Bigr|\Bigr\}\biggr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    In this section, we support our main results by presenting two examples.

    Example 3.1. Let $\rho_1=0$, $\rho_2=\pi$, $\theta=2$, $n=1$, $Y(\varrho)=\sin\varrho$ and $V(\varrho)=|\cos\varrho|$. Then all the assumptions in Theorem 2.2 are satisfied. Note that

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|=\left|\int_0^{\pi}\sin\varrho\,|\cos\varrho|\,d\varrho-\sin\frac{\pi}{2}\int_0^{\pi}|\cos\varrho|\,d\varrho\right|=|1-2|=1 \tag{3.1}$$

    and

    $$\frac{\rho_2-\rho_1}{n+1}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau$$
    $$=\frac{\pi}{2}\Bigl[\bigl|e^{0}\cos 0\bigr|+\bigl|e^{2\pi}\cos\pi\bigr|\Bigr]\int_0^1\int_0^{\frac{(1-\tau)\pi}{2}}\cos\varrho\,d\varrho\,d\tau=\frac{\pi}{2}\bigl(1+e^{2\pi}\bigr)\cdot\frac{2}{\pi}=1+e^{2\pi}\approx 536.5. \tag{3.2}$$

    From (3.1) and (3.2) we clearly see that Example 3.1 supports the conclusion of Theorem 2.2.
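The computations in Example 3.1 can be reproduced numerically (a sketch using a simple midpoint quadrature for all integrals):

```python
import math

def integral(f, a, b, n=20_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r1, r2, theta, n = 0.0, math.pi, 2.0, 1
Y = lambda x: math.sin(x)
dY = lambda x: math.cos(x)
V = lambda x: abs(math.cos(x))

center = (n * r1 + r2) / (n + 1)                                # pi/2
lhs = abs(integral(lambda x: Y(x) * V(x), r1, r2)
          - Y(center) * integral(V, r1, r2))                    # |1 - 2| = 1

# inner(t) = integral of V from rho1 to L(rho1, rho2, t)
inner = lambda t: integral(
    V, r1, (n + t) / (n + 1) * r1 + (1 - t) / (n + 1) * r2, n=2000)
rhs = (r2 - r1) / (n + 1) * (
    abs(math.exp(theta * r1) * dY(r1)) + abs(math.exp(theta * r2) * dY(r2))
) * integral(inner, 0.0, 1.0, n=500)

print(lhs, rhs)   # approximately 1.0 and 536.5
assert lhs <= rhs
```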

    Example 3.2. Let $\rho_1=0$, $\rho_2=2$, $\theta=0.5$, $n=2$, $Y(\varrho)=\sqrt{\varrho+2}$ and $V(\varrho)=\varrho$. Then all the assumptions in Theorem 2.2 are satisfied. Note that

    $$\left|\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho-Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho\right|=\left|\int_0^{2}\varrho\sqrt{\varrho+2}\,d\varrho-\sqrt{\frac{8}{3}}\int_0^{2}\varrho\,d\varrho\right|\approx 0.3758 \tag{3.3}$$

    and

    $$\frac{\rho_2-\rho_1}{n+1}\Bigl[\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau$$
    $$=\frac{2}{3}\Bigl[\Bigl|e^{0.5(0)}\cdot\frac{1}{2\sqrt{2}}\Bigr|+\Bigl|e^{0.5(2)}\cdot\frac{1}{4}\Bigr|\Bigr]\int_0^1\int_0^{\frac{2(1-\tau)}{3}}\varrho\,d\varrho\,d\tau\approx 1.0332. \tag{3.4}$$

    From (3.3) and (3.4) we clearly see that Example 3.2 supports the conclusion of Theorem 2.2.

    Let $\Delta$ be a partition $\rho_1=\varrho_0<\varrho_1<\cdots<\varrho_{\kappa-1}<\varrho_{\kappa}=\rho_2$ of the interval $[\rho_1,\rho_2]$, and consider the quadrature formula

    $$\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho=T(Y,V,p)+E(Y,V,p), \tag{4.1}$$

    where

    $$T(Y,V,p)=\sum_{j=0}^{\kappa-1}Y\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\int_{\varrho_j}^{\varrho_{j+1}}V(\varrho)\,d\varrho$$

    is the weighted mean and $E(Y,V,p)$ is the associated approximation error.

    The aim of this subsection is to provide several new bounds for $E(Y,V,p)$.
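The weighted mean $T(Y,V,p)$ can be implemented directly; the sketch below (with an arbitrary smooth test pair $Y$, $V$) also illustrates that the approximation error shrinks as the partition is refined:

```python
def integral(f, a, b, n=20_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def weighted_mean_T(Y, V, nodes, n=1):
    """T(Y, V, p) = sum_j Y((n*x_j + x_{j+1})/(n+1)) * int_{x_j}^{x_{j+1}} V."""
    total = 0.0
    for xj, xj1 in zip(nodes, nodes[1:]):
        total += Y((n * xj + xj1) / (n + 1)) * integral(V, xj, xj1)
    return total

Y = lambda x: x ** 3          # arbitrary smooth test integrand
V = lambda x: 1.0             # constant weight
exact = integral(lambda x: Y(x) * V(x), 0.0, 1.0)

def err(k):
    """|E(Y, V, p)| for the uniform partition of [0, 1] with k subintervals."""
    nodes = [j / k for j in range(k + 1)]
    return abs(exact - weighted_mean_T(Y, V, nodes))

assert err(32) < err(4) < err(1)   # error decreases under refinement
```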

    Theorem 4.1. Let $\lambda\ge1$, $\theta\in\mathbb{R}$, and $|Y'|^{\lambda}$ be $\theta$-exponentially convex on $I$. Then the inequality

    $$|E(Y,V,p)|\le\frac{2}{n+1}\sum_{j=0}^{\kappa-1}(\varrho_{j+1}-\varrho_j)\biggl(\frac{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda}+\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda}}{2}\biggr)^{\frac{1}{\lambda}}\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau$$

    holds for every partition $\Delta$ of $I$ if all the conditions of Theorem 2.2 are satisfied.

    Proof. Applying Theorem 2.3 on the interval $[\varrho_j,\varrho_{j+1}]$ ($j=0,1,\ldots,\kappa-1$) of the partition $\Delta$, we get

    $$\left|Y\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\int_{\varrho_j}^{\varrho_{j+1}}V(\varrho)\,d\varrho-\int_{\varrho_j}^{\varrho_{j+1}}Y(\varrho)V(\varrho)\,d\varrho\right|\le\frac{2(\varrho_{j+1}-\varrho_j)}{n+1}\biggl(\frac{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda}+\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda}}{2}\biggr)^{\frac{1}{\lambda}}\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Summing the above inequality over $j$ from $0$ to $\kappa-1$ and making use of the triangle inequality together with the exponential convexity of $|Y'|^{\lambda}$ leads to

    $$\left|T(Y,V,p)-\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho\right|\le\frac{2}{n+1}\sum_{j=0}^{\kappa-1}(\varrho_{j+1}-\varrho_j)\biggl(\frac{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda}+\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda}}{2}\biggr)^{\frac{1}{\lambda}}\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    This completes the proof of Theorem 4.1.

    Theorem 4.2. Let $\lambda\ge1$, $\theta\in\mathbb{R}$, and $|Y'|^{\lambda}$ be $\theta$-exponentially quasi-convex on $I$. Then the inequality

    $$|E(Y,V,p)|\le\frac{1}{n+1}\sum_{j=0}^{\kappa-1}(\varrho_{j+1}-\varrho_j)\biggl[\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}+\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}\biggr]\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau$$

    holds for every partition $\Delta$ of $I$ if all the hypotheses of Theorem 2.2 are satisfied.

    Proof. Making use of Theorem 2.5 on the interval $[\varrho_j,\varrho_{j+1}]$ ($j=0,1,\ldots,\kappa-1$) of the partition $\Delta$, we get

    $$\left|Y\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\int_{\varrho_j}^{\varrho_{j+1}}V(\varrho)\,d\varrho-\int_{\varrho_j}^{\varrho_{j+1}}Y(\varrho)V(\varrho)\,d\varrho\right|$$
    $$\le\frac{\varrho_{j+1}-\varrho_j}{n+1}\biggl[\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}+\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}\biggr]\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    Summing the above inequality over $j$ from $0$ to $\kappa-1$ and making use of the triangle inequality leads to the conclusion that

    $$\left|T(Y,V,p)-\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho\right|$$
    $$\le\frac{1}{n+1}\sum_{j=0}^{\kappa-1}(\varrho_{j+1}-\varrho_j)\biggl[\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_j}Y'(\varrho_j)\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{n\varrho_j+\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}+\Bigl(\max\Bigl\{\bigl|e^{\theta\varrho_{j+1}}Y'(\varrho_{j+1})\bigr|^{\lambda},\Bigl|e^{\theta\bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\bigr)}Y'\Bigl(\frac{\varrho_j+n\varrho_{j+1}}{n+1}\Bigr)\Bigr|^{\lambda}\Bigr\}\Bigr)^{\frac{1}{\lambda}}\biggr]\int_0^1\int_{\varrho_j}^{L(\varrho_j,\varrho_{j+1},\tau)}V(\varrho)\,d\varrho\,d\tau.$$

    This completes the proof of Theorem 4.2.

    Let $0<\rho_1<\rho_2$, $r\in\mathbb{R}$, $V:[\rho_1,\rho_2]\rightarrow[0,\infty)$ be continuous on $[\rho_1,\rho_2]$ and symmetric with respect to $\frac{n\rho_1+\rho_2}{n+1}$, and let $X$ be a continuous random variable having probability density function $V$. Then the $r$th moment $E_r(X)$ of $X$ is given by

    $$E_r(X)=\int_{\rho_1}^{\rho_2}\tau^{r}V(\tau)\,d\tau,$$

    provided it is finite.
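As a concrete instance, for the uniform density on $[1,2]$ (symmetric about the midpoint, so $n=1$) the moment bound of Theorem 4.3 below can be checked numerically (a sketch; the parameter choices $r=2$, $\theta=0$ are arbitrary):

```python
import math

def integral(f, a, b, n=100_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r1, r2, n, r, theta = 1.0, 2.0, 1, 2, 0.0
V = lambda t: 1.0                                   # uniform pdf on [1, 2]
Er = integral(lambda t: t ** r * V(t), r1, r2)      # E_2(X) = 7/3

center = (n * r1 + r2) / (n + 1)                    # 3/2
lhs = abs(Er - center ** r)                         # |7/3 - 9/4| = 1/12
rhs = r * (r2 - r1) / (n + 1) ** 2 * (
    abs(math.exp(theta * r1) * r1 ** (r - 1))
    + abs(math.exp(theta * r2) * r2 ** (r - 1)))

assert lhs <= rhs                                   # 1/12 <= 3/2
```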

    Theorem 4.3. The inequality

    $$\left|E_r(X)-\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)^{r}\right|\le\frac{r(\rho_2-\rho_1)}{(n+1)^2}\Bigl[\bigl|e^{\theta\rho_1}\rho_1^{r-1}\bigr|+\bigl|e^{\theta\rho_2}\rho_2^{r-1}\bigr|\Bigr]$$

    holds for $0<\rho_1<\rho_2$ and $r\ge2$.

    Proof. Let $Y(\tau)=\tau^{r}$. Then $|Y'(\tau)|=r\tau^{r-1}$ is an exponentially convex function. Note that

    $$\int_{\rho_1}^{\rho_2}Y(\varrho)V(\varrho)\,d\varrho=E_r(X),\qquad\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\le\int_{\rho_1}^{\frac{n\rho_1+\rho_2}{n+1}}V(\varrho)\,d\varrho=\frac{1}{n+1}\quad(\tau\in[0,1]),$$
    $$Y\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)=\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)^{r},\qquad\bigl|e^{\theta\rho_1}Y'(\rho_1)\bigr|+\bigl|e^{\theta\rho_2}Y'(\rho_2)\bigr|=r\bigl(e^{\theta\rho_1}\rho_1^{r-1}+e^{\theta\rho_2}\rho_2^{r-1}\bigr).$$

    Therefore, the desired result follows immediately from inequality (2.2).

    Theorem 4.4. The inequality

    $$\left|E_r(X)-\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)^{r}\right|\le\frac{r(\rho_2-\rho_1)}{(n+1)^2}\Bigl[\bigl|e^{\theta\rho_2}\rho_2^{r-1}\bigr|+\Bigl|e^{\theta\bigl(\frac{\rho_1+n\rho_2}{n+1}\bigr)}\Bigl(\frac{\rho_1+n\rho_2}{n+1}\Bigr)^{r-1}\Bigr|\Bigr]$$

    holds for $0<\rho_1<\rho_2$ and $r\ge1$.

    Proof. Let $Y(\tau)=\tau^{r}$. Then $|Y'(\tau)|=r\tau^{r-1}$ is increasing and exponentially quasi-convex, and the desired result can be obtained by using inequality (2.15) together with arguments similar to those of Theorem 4.3.

    A real-valued function Ω:(0,)×(0,)(0,) is said to be a bivariate mean if min{ρ1,ρ2}Ω(ρ1,ρ2)max{ρ1,ρ2} for all ρ1,ρ2(0,). Recently, the properties and applications for the bivariate means and their related special functions have attracted the attention of many researchers [73,74,75,76,77,78,79,80,81,82,83,84,85,86]. In particular, many remarkable inequalities for the bivariate means can be found in the literature [87,88,89,90,91,92,93,94,95,96].

    In this subsection, we use the results obtained in Section 2 to give some applications to the special bivariate means.

    Let $\rho_1,\rho_2>0$ with $\rho_1\ne\rho_2$. Then the arithmetic mean $A(\rho_1,\rho_2)$, weighted arithmetic mean $A(\rho_1,\rho_2;w_1,w_2)$ and $n$th generalized logarithmic mean $L_n(\rho_1,\rho_2)$ are defined by

    $$A(\rho_1,\rho_2)=\frac{\rho_1+\rho_2}{2},\qquad A(\rho_1,\rho_2;w_1,w_2)=\frac{w_1\rho_1+w_2\rho_2}{w_1+w_2}$$

    and

    $$L_n(\rho_1,\rho_2)=\biggl[\frac{\rho_2^{n+1}-\rho_1^{n+1}}{(n+1)(\rho_2-\rho_1)}\biggr]^{1/n}.$$
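These means are straightforward to implement; the sketch below checks the defining bivariate-mean property $\min\le\text{mean}\le\max$, together with the identity $L_1=A$ (the test values are arbitrary):

```python
def A(r1, r2):
    """Arithmetic mean."""
    return (r1 + r2) / 2

def A_weighted(r1, r2, w1, w2):
    """Weighted arithmetic mean."""
    return (w1 * r1 + w2 * r2) / (w1 + w2)

def L_gen(r1, r2, n):
    """n-th generalized logarithmic mean."""
    return ((r2 ** (n + 1) - r1 ** (n + 1))
            / ((n + 1) * (r2 - r1))) ** (1.0 / n)

r1, r2 = 1.0, 2.0
for m in (A(r1, r2), A_weighted(r1, r2, 3, 1), L_gen(r1, r2, 2)):
    assert min(r1, r2) <= m <= max(r1, r2)          # bivariate mean property
assert abs(L_gen(r1, r2, 1) - A(r1, r2)) < 1e-12    # L_1 coincides with A
```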

    Let $\varrho>0$, $r\in\mathbb{N}$ with $r\ge2$, $Y(\varrho)=\varrho^{r}$, and let $V:[\rho_1,\rho_2]\rightarrow\mathbb{R}^{+}$ be a differentiable mapping which is symmetric with respect to $\frac{n\rho_1+\rho_2}{n+1}$. Then Theorem 2.2 implies that

    $$\left|\Bigl(\frac{n\rho_1+\rho_2}{n+1}\Bigr)^{r}\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho-\int_{\rho_1}^{\rho_2}\varrho^{r}V(\varrho)\,d\varrho\right|\le\frac{r(\rho_2-\rho_1)}{n+1}\Bigl[\bigl|e^{\theta\rho_1}\rho_1^{r-1}\bigr|+\bigl|e^{\theta\rho_2}\rho_2^{r-1}\bigr|\Bigr]\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau,$$

    which can be rewritten as

    $$\left|\bigl(A(\rho_1,\rho_2;n,1)\bigr)^{r}\int_{\rho_1}^{\rho_2}V(\varrho)\,d\varrho-\int_{\rho_1}^{\rho_2}\varrho^{r}V(\varrho)\,d\varrho\right|\le\frac{2r(\rho_2-\rho_1)}{n+1}A\bigl(\bigl|e^{\theta\rho_1}\rho_1^{r-1}\bigr|,\bigl|e^{\theta\rho_2}\rho_2^{r-1}\bigr|\bigr)\int_0^1\int_{\rho_1}^{L(\rho_1,\rho_2,\tau)}V(\varrho)\,d\varrho\,d\tau. \tag{4.2}$$

    Letting $V=1$, inequality (4.2) leads to Corollary 4.1 immediately.

    Corollary 4.1. Let $\rho_2>\rho_1>0$, $r\in\mathbb{N}$ and $r\ge2$. Then one has

    $$\left|\bigl(A(\rho_1,\rho_2;n,1)\bigr)^{r}-L_r^r(\rho_1,\rho_2)\right|\le\frac{r(\rho_2-\rho_1)}{(n+1)^2}A\bigl(\bigl|e^{\theta\rho_1}\rho_1^{r-1}\bigr|,\bigl|e^{\theta\rho_2}\rho_2^{r-1}\bigr|\bigr).$$

    We have developed a novel formulation of Hermite-Hadamard type inequalities for two classes of functions, the exponentially convex and the exponentially quasi-convex functions, and presented their analogues. The auxiliary identity (Lemma 2.1) was chosen because it leads naturally to well-known Hermite-Hadamard type inequalities, and its simple form is significant when studying the error bounds of numerical quadrature rules; this connection merits further investigation. The results derived in this paper are general in character and contribute to inequality theory and fractional calculus, with potential applications to establishing the uniqueness of solutions of boundary value problems and fractional differential equations. Finally, the concept of exponential convexity was applied to $r$th moments and special bivariate means to illustrate the reported results. Our findings refine and generalize existing results and suggest directions for future research.

    The authors would like to thank the anonymous referees for their valuable comments and suggestions, which led to considerable improvement of the article.

    The research is supported by the Natural Science Foundation of China (Grant Nos. 11701176, 61673169, 11301127, 11626101, 11601485).

    The authors declare that they have no competing interests.



    [1] L. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory, 18 (1971), 507–519. https://doi.org/10.1109/TCT.1971.1083337 doi: 10.1109/TCT.1971.1083337
    [2] D. B. Strukov, G. S. Snider, D. R. Stewart, R. S. Williams, The missing memristor found, Nature, 453 (2008), 80–83. https://doi.org/10.1038/nature06932
    [3] S. Wen, T. Huang, Z. Zeng, Y. Chen, P. Li, Circuit design and exponential stabilization of memristive neural networks, Neural Networks, 63 (2015), 48–56. https://doi.org/10.1016/j.neunet.2014.10.011 doi: 10.1016/j.neunet.2014.10.011
    [4] Y. Zhao, S. Ren, J. Kurths, Finite-time and fixed-time synchronization for a class of memristor-based competitive neural networks with different time scales, Chaos Solitons Fract., 148 (2021), 111033. https://doi.org/10.1016/j.chaos.2021.111033 doi: 10.1016/j.chaos.2021.111033
    [5] H. Bao, J. Cao, J. Kurths, State estimation of fractional-order delayed memristive neural networks, Nonlinear Dyn., 94 (2018), 1215–1225. https://doi.org/10.1007/s11071-018-4419-3 doi: 10.1007/s11071-018-4419-3
    [6] L. Duan, L. Huang, Periodicity and dissipativity for memristor-based mixed time-varying delayed neural networks via differential inclusions, Neural Networks, 57 (2014), 12–22. https://doi.org/10.1016/j.neunet.2014.05.002 doi: 10.1016/j.neunet.2014.05.002
    [7] Z. Guo, S. Yang, J. Wang, Global exponential synchronization of multiple memristive neural networks with time delay via nonlinear coupling, IEEE Trans. Neural Networks Learn. Syst., 26 (2014), 1300–1311. https://doi.org/10.1109/TNNLS.2014.2354432 doi: 10.1109/TNNLS.2014.2354432
    [8] Y. Huang, F. Wu, Finite-time passivity and synchronization of coupled complex-valued memristive neural networks, Inf. Sci., 580 (2021), 775–880. https://doi.org/10.1016/j.ins.2021.09.050 doi: 10.1016/j.ins.2021.09.050
    [9] Y. Huang, S. Qiu, S. Ren, Finite-time synchronisation and passivity of coupled memristive neural networks, Int. J. Control, 93 (2020), 2824–2837. https://doi.org/10.1080/00207179.2019.1566640 doi: 10.1080/00207179.2019.1566640
    [10] L. Wang, D. Xu, Global exponential stability of Hopfield reaction-diffusion neural networks with time-varying delays, Sci. China Ser. F, 46 (2003), 466–474. https://doi.org/10.1016/j.neunet.2019.12.016 doi: 10.1016/j.neunet.2019.12.016
    [11] J. Wang, X. Zhang, H. Wu, T. Huang, Q. Wang, Finite-time passivity and synchronization of coupled reaction-diffusion neural networks with multiple weights, IEEE Trans. Cybern., 49 (2018), 3385–3397. https://doi.org/10.1109/TCYB.2018.2842437 doi: 10.1109/TCYB.2018.2842437
    [12] L. Duan, L. Huang, Z. Guo, X. Fang, Periodic attractor for reaction-diffusion high-order Hopfield neural networks with time-varying delays, Comput. Math. Appl., 73 (2017), 233–245. https://doi.org/10.1016/j.camwa.2016.11.010 doi: 10.1016/j.camwa.2016.11.010
    [13] J. Wang, H. Wu, L. Guo, Passivity and stability analysis of reaction-diffusion neural networks with Dirichlet boundary conditions, IEEE Trans. Neural Networks, 22 (2011), 2105–2116. https://doi.org/10.1109/TNN.2011.2170096 doi: 10.1109/TNN.2011.2170096
    [14] J. Wang, H. Wu, T. Huang, S. Ren, Passivity and synchronization of linearly coupled reaction-diffusion neural networks with adaptive coupling, IEEE Trans. Cybern., 45 (2014), 1942–1952. https://doi.org/10.1109/TCYB.2014.2362655 doi: 10.1109/TCYB.2014.2362655
    [15] L. Shanmugam, P. Mani, R. Rajan, Y. H. Joo, Adaptive synchronization of reaction-diffusion neural networks and its application to secure communication, IEEE Trans. Cybern., 50 (2018), 911–922. https://doi.org/10.1109/TCYB.2018.2877410 doi: 10.1109/TCYB.2018.2877410
    [16] S. Wang, Z. Guo, S. Wen, T. Huang, Gloabl synchronization of coupled delayed memristive reaction-diffusion neural networks, Neural Networks, 123 (2020), 362–371. https://doi.org/10.1016/j.neunet.2019.12.016 doi: 10.1016/j.neunet.2019.12.016
    [17] J. Cheng, Pinning-controlled synchronization of partially coupled dynamical networks via impulsive control, AIMS Math., 7 (2022), 143–155. https://doi.org/10.3934/math.2022008 doi: 10.3934/math.2022008
    [18] Y. Huang, J. Hou, E. Yang, General decay lag anti-synchronization of multi-weighted delayed coupled neural networks with reaction-diffusion terms, Inf. Sci., 511 (2020), 36–57. https://doi.org/10.1016/j.ins.2019.09.045 doi: 10.1016/j.ins.2019.09.045
    [19] S. Bhat, D. Bernstein, Finite time stability of homogeneous systems, Proc. Amer. Control Conf., 1997, 2513–2514. https://doi.org/10.1109/ACC.1997.609245
    [20] L. Duan, M. Shi, C. Huang, M. Fang, New results on finite-time synchronization of delayed fuzzy neural networks with inertial effects, Int. J. Fuzzy Syst., 24 (2022), 676–685. https://doi.org/10.1007/s40815-021-01171-1 doi: 10.1007/s40815-021-01171-1
    [21] A. Polyakov, Nonlinear feedback design for fixed-time stabilization of linear control systems, IEEE Trans. Autom. Control, 57 (2012), 2106–2110. https://doi.org/10.1109/TAC.2011.2179869 doi: 10.1109/TAC.2011.2179869
    [22] C. Aouiti, E. Assali, Y. Foutayeni, Finite-time and fixed-time synchronization of inertial Cohen-Grossberg-type neural networks with time varying delays, Neural Process. Lett., 50 (2019), 2407–2436. https://doi.org/10.1007/s11063-019-10018-8 doi: 10.1007/s11063-019-10018-8
    [23] J. Xiao, Z. Zeng, S. Wen, A. Wu, L. Wang, A unified framework design for finite-time and fixed-time synchronization of discontinuous neural networks, IEEE Trans. Cybern., 51 (2019), 3004–3016. https://doi.org/10.1109/TCYB.2019.2957398 doi: 10.1109/TCYB.2019.2957398
    [24] X. Liu, D. W. C. Ho, Q. Song, W. Xu, Finite/fixed-time pinning synchronization of complex networks with stochastic disturbances, IEEE Trans. Cybern., 49 (2018), 2398–2403. https://doi.org/10.1109/TCYB.2018.2821119 doi: 10.1109/TCYB.2018.2821119
    [25] Q. Wang, L. Duan, H. Wei, L. Wang, Finite-time anti-synchronisation of delayed Hopfield neural networks with discontinuous activations, Int. J. Control, 2021. https://doi.org/10.1080/00207179.2021.1912396
    [26] L. Duan, M. Shi, L. Huang, New results on finite-/fixed-time synchronization of delayed diffusive fuzzy HNNs with discontinuous activations, Fuzzy Sets Syst., 416 (2021), 141–151. https://doi.org/10.1016/j.fss.2020.04.016 doi: 10.1016/j.fss.2020.04.016
    [27] X. Liu, D. Ho, Q. Song, J. Cao, Finite-/fixed-time robust stabilization of switched discontinuous systems with disturbances, Nonlinear Dyn. 90 (2017), 2057–2068. https://doi.org/10.1007/s11071-017-3782-9
    [28] S. Wang, Z. Guo, S. Wen. T. Huang, S. Gong, Finite/fixed-time synchronization of delyed memrisitive reaction-diffusion neural networks, Neurocomputing, 375 (2020), 1–8. https://doi.org/10.1016/j.neucom.2019.06.092 doi: 10.1016/j.neucom.2019.06.092
    [29] L. M. Pecora, T. L. Carroll, Synchronization in chaotic systems, Phys. Rev. Lett., 64 (1990), 821. https://doi.org/10.1103/PhysRevLett.64.821 doi: 10.1103/PhysRevLett.64.821
    [30] J. Zhou, S. Xu, H. Shen, B. Zhang, Passivity analysis for uncertain BAM neural networks with time delays and reaction-diffusions, Int. J. Syst. Sci., 44 (2013), 1494–1503. https://doi.org/10.1080/00207721.2012.659693 doi: 10.1080/00207721.2012.659693
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)