
Multi-stability analysis of fractional-order quaternion-valued neural networks with time delay

  • This paper addresses the problem of multi-stability analysis for fractional-order quaternion-valued neural networks (QVNNs) with time delay. Based on the geometrical properties of the activation functions and the intermediate value theorem, some conditions are derived for the existence of at least $(2K^R_p+1)^n$, $(2K^I_p+1)^n$, $(2K^J_p+1)^n$, $(2K^K_p+1)^n$ equilibrium points, of which $(K^R_p+1)^n$, $(K^I_p+1)^n$, $(K^J_p+1)^n$, $(K^K_p+1)^n$ are uniformly stable while the other equilibrium points are unstable. Thus the developed results show that QVNNs can have more general properties than real-valued neural networks (RVNNs) or complex-valued neural networks (CVNNs). Finally, two simulation results are given to illustrate the effectiveness and validity of the obtained theoretical results.

    Citation: S. Kathiresan, Ardak Kashkynbayev, K. Janani, R. Rakkiyappan. Multi-stability analysis of fractional-order quaternion-valued neural networks with time delay[J]. AIMS Mathematics, 2022, 7(3): 3603-3629. doi: 10.3934/math.2022199




    In recent years, the fast development of fractional calculus has found extensive applications in fields that possess dynamical nature and uncertain behaviors, including biological modelling, control theory and engineering, see for example [1,2] and the references therein, which motivates scientists and researchers to concentrate on incorporating fractional-order characteristics into system models. Specifically, for dynamic neural-network-like systems, the fractional-order neural networks in [3,4] have achieved better results than the integer-order ones in [5,6]. This is because fractional-order derivatives exhibit excellent memory and hereditary properties when representing the network model. Based on the theory of fractional-order differential systems with discontinuous right-hand sides (RHS), plenty of results on stability and synchronization have been reported in the literature [7,8,9]. Fractional-order Lane-Emden systems have been solved using multiple techniques in [10,11,12,13,14], and the dynamic analysis of a novel discrete fractional model for COVID-19 was carried out by the authors of [15]. Caputo derivatives have been widely used for fractional-order derivatives. The major advantage of the Caputo derivative over the Grünwald-Letnikov and Riemann-Liouville derivatives is that it treats the initial values in the same way as integer-order differential equations, i.e., it involves the values of integer-order derivatives of the unknown functions at the lower terminal t = a [16,17]. In [18], the left Caputo fractional derivative was considered to solve fractional-order problems. The authors of [19] used the Caputo-Fabrizio derivative to handle fractional partial differential equations. The authors of [20] used Caputo derivatives to evaluate the fractional Burgers equation via the Crank-Nicolson finite difference method. Also, in [21,22], the authors used Caputo integrals to evaluate certain inequalities. The main advantage of the operator used here over previously used operators is that it can be applied to fractional-order QVNN problems. RVNNs and CVNNs have gained much interest because of their wide applications in science and engineering, such as optimization computation, image processing, parallel computation and pattern recognition (see for instance [23,24,25,26,27]), and they have been employed to investigate the dynamical properties of fractional-order nonlinear systems. The main difference between them is that CVNNs process information using complex-valued parameters and variables, whereas RVNNs use real-valued parameters and variables. QVNNs are an extension of real-valued or complex-valued systems, and due to the non-commutativity of the quaternion algebra (see [28,29,30,31,32,33]), quaternion problems are more difficult than real-valued or complex-valued ones, which is the reason for the slow development of the quaternion field. Various techniques have been used to analyse the dynamical properties of QVNNs: for instance, a semidiscretization technique was used in [28], the authors of [32] used an inequality technique, and in [33] the authors used Lyapunov-Krasovskii functionals. This field has received increasing interest recently, due to its applications in information processing, optimization and automatic control [34,35].

    Moreover, depending on the topological structure, neural network systems may experience different dynamical behaviors, and hence problems of stability analysis (see [36,37,38,39,40,41,42,43]), synchronization and bifurcation were studied in [3] and [44]. Various types of synchronization, such as finite-time synchronization using a non-separation method [45] and global asymptotical and Mittag-Leffler synchronization with delays [46], have been carried out for QVNNs. In general, the considered quaternion-valued networks tend to converge to an equilibrium point, a periodic orbit or a chaotic trajectory. As one of the classical phenomena of dynamic neural networks, multi-stability analysis has been extensively studied in [7,42,47,48]. It is known that the multi-stability of the designed neural networks is a key requirement in various applications, such as optimal computation, associative memory and pattern recognition. Especially in pattern recognition, the system can converge to a certain stable equilibrium point in the process of memory retrieval, where the pattern is stored as a binary vector. Thus, it is important to analyze the existence of multiple equilibria and their stability. In [7], sigmoid activation functions were employed in a class of recurrent neural networks to establish the existence of $(2K_i+1)^n$ equilibrium points and the stability of $(K_i+1)^n$ of them. In [47], a class of integer-order recurrent neural networks with unbounded time-varying delays was introduced, and by using the geometrical properties of non-monotonic activation functions it was shown that the addressed system has exactly $(2K_i+1)^n$ equilibrium points, of which $(K_i+1)^n$ are locally asymptotically stable while the others are unstable. One of the main novelties of this paper is to extend the multi-stability results for integer-order RVNNs or CVNNs to fractional-order QVNNs, in order to show that there exist more equilibrium points in QVNNs than in RVNNs, which makes this work different from most existing works on integer-order systems. This paper is devoted to presenting a theoretical multi-stability analysis for fractional-order QVNNs with time delay.

    Motivated by the above discussion, we study multiple stability results for fractional-order QVNNs with time delay. First, an $n$-dimensional QVNN is converted into a $4n$-dimensional RVNN system by using the decomposition and the non-commutativity properties of quaternions. Then some sufficient conditions are derived for the fractional-order nonlinear system to have $(2K^R_p+1)^n$, $(2K^I_p+1)^n$, $(2K^J_p+1)^n$, $(2K^K_p+1)^n$ equilibrium points. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results.

    The major contributions of this article include the conversion of an $n$-dimensional QVNN with time delay into a $4n$-dimensional RVNN system with the help of the decomposition and non-commutativity properties. A multi-stability analysis is then carried out, and we conclude that $(K^R_p+1)^n$, $(K^I_p+1)^n$, $(K^J_p+1)^n$, $(K^K_p+1)^n$ of the equilibrium points are uniformly stable while the other equilibrium points among the $(2K^R_p+1)^n$, $(2K^I_p+1)^n$, $(2K^J_p+1)^n$, $(2K^K_p+1)^n$ equilibrium points are unstable. Two numerical simulation results are also provided to validate the obtained results. To the best of our knowledge, such a multiple stability analysis for fractional-order QVNNs with time delays has not yet been presented, which sums up the novelty of our work.

    The paper is organised as follows: Section 2 details the required preliminary definitions and assumptions. In Section 3, the main results and theorems for the multi-stability conditions are proposed. Numerical simulations illustrating the effectiveness of the proposed results are provided in Section 4. The results of the paper are summarized in Section 5 to provide a proper conclusion.

    The notations used in this paper are as follows: the real field, the complex field and the skew field of quaternions are denoted by $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{Q}$, respectively.

    In this section, we present some important definitions and lemmas of fractional calculus which help to prove the main results.

    Definition 2.1. [17,49] The fractional integral of order υ>0 for a function f(t) is defined as

    $$I^{\upsilon}f(t)=\frac{1}{\Gamma(\upsilon)}\int_{t_0}^{t}\frac{f(s)}{(t-s)^{1-\upsilon}}\,ds,\quad t\geq t_0.$$

    Here $\Gamma(\upsilon)$ is the Gamma function, defined as $\Gamma(\upsilon)=\int_{0}^{\infty}e^{-t}t^{\upsilon-1}\,dt$.

    Definition 2.2. [17,49] The Caputo fractional derivative of a function $f(t)\in C^{n}([t_0,+\infty),\mathbb{R})$ with order $\upsilon>0$ is defined as

    $${}^{C}_{t_0}D^{\upsilon}_{t}f(t)=\frac{1}{\Gamma(n-\upsilon)}\int_{t_0}^{t}(t-s)^{n-\upsilon-1}\frac{d^{n}f(s)}{ds^{n}}\,ds,$$

    where $t\geq t_0$ and $n$ is a positive integer such that $n-1<\upsilon<n$. In particular, when $0<\upsilon<1$,

    $${}^{C}_{t_0}D^{\upsilon}_{t}f(t)=\frac{1}{\Gamma(1-\upsilon)}\int_{t_0}^{t}(t-s)^{-\upsilon}\frac{df(s)}{ds}\,ds.$$

    For convenience, in the rest of the paper we adopt the notation $D^{\upsilon}$ to denote the Caputo fractional derivative operator ${}^{C}_{t_0}D^{\upsilon}_{t}$.
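    To make Definitions 2.1 and 2.2 concrete, the following sketch approximates the Caputo derivative of order $0<\upsilon<1$ on a uniform grid with the classical L1 scheme. The scheme itself is standard; the step size, grid and test function below are our own illustrative choices, not taken from the paper, and NumPy is assumed to be available.

        import numpy as np
        from math import gamma

        def caputo_l1(f_vals, h, nu):
            """L1 approximation of the Caputo derivative of order 0 < nu < 1:
            D^nu f(t_n) ~ h^(-nu)/Gamma(2-nu) * sum_k b_k [f(t_{n-k}) - f(t_{n-k-1})],
            with weights b_k = (k+1)^(1-nu) - k^(1-nu)."""
            n = len(f_vals)
            df = np.diff(f_vals)                                   # forward differences
            b = np.arange(1, n) ** (1 - nu) - np.arange(0, n - 1) ** (1 - nu)
            out = np.zeros(n)
            for m in range(1, n):
                # most recent difference gets weight b_0, the oldest gets b_{m-1}
                out[m] = (h ** (-nu) / gamma(2 - nu)) * np.dot(b[:m][::-1], df[:m])
            return out

        # sanity check against the known formula D^nu t = t^(1-nu)/Gamma(2-nu)
        h, nu = 1e-3, 0.98
        t = np.arange(0.0, 1.0 + h, h)
        err = np.max(np.abs(caputo_l1(t, h, nu)[1:] - t[1:] ** (1 - nu) / gamma(2 - nu)))
        print(err)  # agrees up to round-off for this linear test function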

    Consider the following fractional-order QVNN with time delay:

    $$D^{\upsilon}h_p(t)=-d_ph_p(t)+\sum_{q=1}^{n}a_{pq}f_q(h_q(t))+\sum_{q=1}^{n}b_{pq}g_q(h_q(t-\tau))+R_p, \qquad (2.1)$$

    or equivalently

    $$D^{\upsilon}h(t)=-Dh(t)+Af(h(t))+Bg(h(t-\tau))+R, \qquad (2.2)$$

    where $p=1,2,\dots,n$; $h(t)=(h_1(t),h_2(t),\dots,h_n(t))^{T}\in\mathbb{Q}^{n}$ is the state vector of the neurons at time $t$; $D=\mathrm{diag}\{d_1,d_2,\dots,d_n\}\in\mathbb{Q}^{n\times n}$ is a positive diagonal matrix; $f(h(t))$ and $g(h(t-\tau))$ denote the neuron activation functions without and with time delay, respectively; $\tau>0$ denotes the constant time delay; $A\in\mathbb{Q}^{n\times n}$ and $B\in\mathbb{Q}^{n\times n}$ are the interconnection matrices without and with time delay, respectively; and $R=(R_1,R_2,\dots,R_n)^{T}\in\mathbb{Q}^{n}$ is the external input. Using the non-commutativity of quaternion multiplication resulting from the Hamilton rules $i^{2}=j^{2}=k^{2}=ijk=-1$, $ij=-ji=k$, $jk=-kj=i$, $ki=-ik=j$, we can write (2.2) as the following four real-valued neural networks (2.3).

    $$\begin{cases}D^{\upsilon}h^{R}(t)=-Dh^{R}(t)+A^{R}f^{R}(h^{R}(t))-A^{I}f^{I}(h^{I}(t))-A^{J}f^{J}(h^{J}(t))-A^{K}f^{K}(h^{K}(t))\\ \qquad\qquad\quad+B^{R}f^{R}(h^{R}(t-\tau))-B^{I}f^{I}(h^{I}(t-\tau))-B^{J}f^{J}(h^{J}(t-\tau))-B^{K}f^{K}(h^{K}(t-\tau))+R^{R},\\ D^{\upsilon}h^{I}(t)=-Dh^{I}(t)+A^{R}f^{I}(h^{I}(t))+A^{I}f^{R}(h^{R}(t))+A^{J}f^{K}(h^{K}(t))-A^{K}f^{J}(h^{J}(t))\\ \qquad\qquad\quad+B^{R}f^{I}(h^{I}(t-\tau))+B^{I}f^{R}(h^{R}(t-\tau))+B^{J}f^{K}(h^{K}(t-\tau))-B^{K}f^{J}(h^{J}(t-\tau))+R^{I},\\ D^{\upsilon}h^{J}(t)=-Dh^{J}(t)+A^{R}f^{J}(h^{J}(t))-A^{I}f^{K}(h^{K}(t))+A^{J}f^{R}(h^{R}(t))+A^{K}f^{I}(h^{I}(t))\\ \qquad\qquad\quad+B^{R}f^{J}(h^{J}(t-\tau))-B^{I}f^{K}(h^{K}(t-\tau))+B^{J}f^{R}(h^{R}(t-\tau))+B^{K}f^{I}(h^{I}(t-\tau))+R^{J},\\ D^{\upsilon}h^{K}(t)=-Dh^{K}(t)+A^{R}f^{K}(h^{K}(t))+A^{I}f^{J}(h^{J}(t))-A^{J}f^{I}(h^{I}(t))+A^{K}f^{R}(h^{R}(t))\\ \qquad\qquad\quad+B^{R}f^{K}(h^{K}(t-\tau))+B^{I}f^{J}(h^{J}(t-\tau))-B^{J}f^{I}(h^{I}(t-\tau))+B^{K}f^{R}(h^{R}(t-\tau))+R^{K}.\end{cases}\qquad(2.3)$$
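    The four equations in (2.3) are just the componentwise form of the quaternion products $Af$ and $Bg$ under the Hamilton rules. The following small helper is our own illustration (NumPy assumed); it carries out that sign bookkeeping for generic real component matrices and vectors and can be used to assemble the right-hand side of (2.3).

        import numpy as np

        def quat_matvec(AR, AI, AJ, AK, fR, fI, fJ, fK):
            """Real, i, j and k components of the quaternion product A*f, where
            A = AR + i*AI + j*AJ + k*AK and f = fR + i*fI + j*fJ + k*fK.
            The signs follow the Hamilton rules i^2 = j^2 = k^2 = ijk = -1."""
            real = AR @ fR - AI @ fI - AJ @ fJ - AK @ fK
            comp_i = AR @ fI + AI @ fR + AJ @ fK - AK @ fJ
            comp_j = AR @ fJ - AI @ fK + AJ @ fR + AK @ fI
            comp_k = AR @ fK + AI @ fJ - AJ @ fI + AK @ fR
            return real, comp_i, comp_j, comp_k

        # example: the non-delayed drive terms of (2.3) for a 2-neuron network
        n = 2
        rng = np.random.default_rng(0)
        AR, AI, AJ, AK = (rng.standard_normal((n, n)) for _ in range(4))
        hR, hI, hJ, hK = (rng.standard_normal(n) for _ in range(4))
        drive = quat_matvec(AR, AI, AJ, AK, np.tanh(hR), np.tanh(hI), np.tanh(hJ), np.tanh(hK))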

    Definition 2.3. [2] The equilibrium point $h^{*}$ of (2.2) is said to be stable if for any $\varepsilon>0$ there exists $\lambda(t_0,\varepsilon)>0$ such that $\|\psi-h^{*}\|<\lambda$ implies $\|h(t)-h^{*}\|<\varepsilon$ for any $t\geq t_0\geq 0$. It is uniformly stable if $\lambda$ is independent of $t_0$.

    Assumption 2.4. The activation functions $f^{R}_p(h^{R}_p)$, $f^{I}_p(h^{I}_p)$, $f^{J}_p(h^{J}_p)$, $f^{K}_p(h^{K}_p)$ are continuous and differentiable, and there exist constants $m^{R}_p<M^{R}_p$, $m^{I}_p<M^{I}_p$, $m^{J}_p<M^{J}_p$, $m^{K}_p<M^{K}_p$, for $p=1,2,\dots,n$ and $\nu=R,I,J,K$, such that

    $$\lim_{h^{\nu}_p\to-\infty}f^{\nu}_p(h^{\nu}_p)=m^{\nu}_p,\qquad \lim_{h^{\nu}_p\to+\infty}f^{\nu}_p(h^{\nu}_p)=M^{\nu}_p.$$

    Also, there exist constants

    $$-\infty\leq\bar{\sigma}^{(0)}_p<\bar{\beta}^{(0)}_p<\bar{\sigma}^{(1)}_p<\bar{\beta}^{(1)}_p<\cdots<\bar{\sigma}^{(K^{R}_p-1)}_p<\bar{\beta}^{(K^{R}_p-1)}_p<\bar{\sigma}^{(K^{R}_p)}_p<\bar{\beta}^{(K^{R}_p)}_p\leq+\infty,$$

    $$-\infty\leq\tilde{\sigma}^{(0)}_p<\tilde{\beta}^{(0)}_p<\tilde{\sigma}^{(1)}_p<\tilde{\beta}^{(1)}_p<\cdots<\tilde{\sigma}^{(K^{I}_p-1)}_p<\tilde{\beta}^{(K^{I}_p-1)}_p<\tilde{\sigma}^{(K^{I}_p)}_p<\tilde{\beta}^{(K^{I}_p)}_p\leq+\infty,$$

    $$-\infty\leq\check{\sigma}^{(0)}_p<\check{\beta}^{(0)}_p<\check{\sigma}^{(1)}_p<\check{\beta}^{(1)}_p<\cdots<\check{\sigma}^{(K^{J}_p-1)}_p<\check{\beta}^{(K^{J}_p-1)}_p<\check{\sigma}^{(K^{J}_p)}_p<\check{\beta}^{(K^{J}_p)}_p\leq+\infty,$$

    $$-\infty\leq\hat{\sigma}^{(0)}_p<\hat{\beta}^{(0)}_p<\hat{\sigma}^{(1)}_p<\hat{\beta}^{(1)}_p<\cdots<\hat{\sigma}^{(K^{K}_p-1)}_p<\hat{\beta}^{(K^{K}_p-1)}_p<\hat{\sigma}^{(K^{K}_p)}_p<\hat{\beta}^{(K^{K}_p)}_p\leq+\infty,$$

    and constants $\delta^{\nu R}_q,\delta^{\nu I}_q,\delta^{\nu J}_q,\delta^{\nu K}_q,\bar{\delta}^{\nu R}_q,\bar{\delta}^{\nu I}_q,\bar{\delta}^{\nu J}_q,\bar{\delta}^{\nu K}_q$, with $r=0,1,2,\dots,K^{\nu}_p$ and $s=1,2,\dots,K^{\nu}_p$, such that for $p,q=1,\dots,n$,

    $$|f^{\nu}_q(h^{R}_1,h^{I}_1,h^{J}_1,h^{K}_1)-f^{\nu}_q(h^{R}_2,h^{I}_2,h^{J}_2,h^{K}_2)|\leq\delta^{\nu R}_q|h^{R}_1-h^{R}_2|+\delta^{\nu I}_q|h^{I}_1-h^{I}_2|+\delta^{\nu J}_q|h^{J}_1-h^{J}_2|+\delta^{\nu K}_q|h^{K}_1-h^{K}_2|$$ for any $h^{(\nu)}_1,h^{(\nu)}_2\in(\sigma^{(r)}_p,\beta^{(r)}_p)$, and $$|f^{\nu}_q(h^{R}_1,h^{I}_1,h^{J}_1,h^{K}_1)-f^{\nu}_q(h^{R}_2,h^{I}_2,h^{J}_2,h^{K}_2)|\leq\bar{\delta}^{\nu R}_q|h^{R}_1-h^{R}_2|+\bar{\delta}^{\nu I}_q|h^{I}_1-h^{I}_2|+\bar{\delta}^{\nu J}_q|h^{J}_1-h^{J}_2|+\bar{\delta}^{\nu K}_q|h^{K}_1-h^{K}_2|$$ for any $h^{(\nu)}_1,h^{(\nu)}_2\in[\beta^{(s-1)}_p,\sigma^{(s)}_p]$, where for $\nu=R$: $\sigma^{(r)}_p=\bar{\sigma}^{(r)}_p$, $\beta^{(r)}_p=\bar{\beta}^{(r)}_p$, $\sigma^{(s)}_p=\bar{\sigma}^{(s)}_p$, $\beta^{(s-1)}_p=\bar{\beta}^{(s-1)}_p$; for $\nu=I$: $\sigma^{(r)}_p=\tilde{\sigma}^{(r)}_p$, $\beta^{(r)}_p=\tilde{\beta}^{(r)}_p$, $\sigma^{(s)}_p=\tilde{\sigma}^{(s)}_p$, $\beta^{(s-1)}_p=\tilde{\beta}^{(s-1)}_p$; for $\nu=J$: $\sigma^{(r)}_p=\check{\sigma}^{(r)}_p$, $\beta^{(r)}_p=\check{\beta}^{(r)}_p$, $\sigma^{(s)}_p=\check{\sigma}^{(s)}_p$, $\beta^{(s-1)}_p=\check{\beta}^{(s-1)}_p$; for $\nu=K$: $\sigma^{(r)}_p=\hat{\sigma}^{(r)}_p$, $\beta^{(r)}_p=\hat{\beta}^{(r)}_p$, $\sigma^{(s)}_p=\hat{\sigma}^{(s)}_p$, $\beta^{(s-1)}_p=\hat{\beta}^{(s-1)}_p$. Let us define the following subsets: $S_1=(\bar{\sigma}^{(r)}_p,\bar{\beta}^{(r)}_p)\times(\tilde{\sigma}^{(r)}_p,\tilde{\beta}^{(r)}_p)\times(\check{\sigma}^{(r)}_p,\check{\beta}^{(r)}_p)\times(\hat{\sigma}^{(r)}_p,\hat{\beta}^{(r)}_p)$ and $S_2=[\bar{\beta}^{(s-1)}_p,\bar{\sigma}^{(s)}_p]\times[\tilde{\beta}^{(s-1)}_p,\tilde{\sigma}^{(s)}_p]\times[\check{\beta}^{(s-1)}_p,\check{\sigma}^{(s)}_p]\times[\hat{\beta}^{(s-1)}_p,\hat{\sigma}^{(s)}_p]$.

    To determine the number of multiple equilibrium points, we define the following bounding functions:

    $$\begin{aligned}\bar{F}^{-}_p(u^R)={}&-d_pu^R+(a^R_{pp}+b^R_{pp})f^R_p(u^R)+R^R_p+\sum_{q=1,q\neq p}^{n}\min\{(a^R_{pq}+b^R_{pq})m^R_q,(a^R_{pq}+b^R_{pq})M^R_q\}\\ &-\sum_{q=1}^{n}\Big[\max\{(a^I_{pq}+b^I_{pq})m^I_q,(a^I_{pq}+b^I_{pq})M^I_q\}+\max\{(a^J_{pq}+b^J_{pq})m^J_q,(a^J_{pq}+b^J_{pq})M^J_q\}\\ &\qquad\qquad+\max\{(a^K_{pq}+b^K_{pq})m^K_q,(a^K_{pq}+b^K_{pq})M^K_q\}\Big],\\ \bar{F}^{+}_p(u^R)={}&-d_pu^R+(a^R_{pp}+b^R_{pp})f^R_p(u^R)+R^R_p+\sum_{q=1,q\neq p}^{n}\max\{(a^R_{pq}+b^R_{pq})m^R_q,(a^R_{pq}+b^R_{pq})M^R_q\}\\ &-\sum_{q=1}^{n}\Big[\min\{(a^I_{pq}+b^I_{pq})m^I_q,(a^I_{pq}+b^I_{pq})M^I_q\}+\min\{(a^J_{pq}+b^J_{pq})m^J_q,(a^J_{pq}+b^J_{pq})M^J_q\}\\ &\qquad\qquad+\min\{(a^K_{pq}+b^K_{pq})m^K_q,(a^K_{pq}+b^K_{pq})M^K_q\}\Big].\end{aligned}$$

    The functions $\bar{F}^{-}_p(u^I),\bar{F}^{-}_p(u^J),\bar{F}^{-}_p(u^K),\bar{F}^{+}_p(u^I),\bar{F}^{+}_p(u^J),\bar{F}^{+}_p(u^K)$ are defined similarly.

    From Assumption 2.4 and the continuity of $\bar{F}^{-}_p(u)$ and $\bar{F}^{+}_p(u)$, it follows that

    $$\lim_{u\to-\infty}\bar{F}^{-}_p(u)=+\infty,\qquad\lim_{u\to+\infty}\bar{F}^{+}_p(u)=-\infty. \qquad(2.4)$$

    We consider constants $\bar{\sigma}^{(0)}_p,\bar{\beta}^{(K^R_p)}_p,\tilde{\sigma}^{(0)}_p,\tilde{\beta}^{(K^I_p)}_p,\check{\sigma}^{(0)}_p,\check{\beta}^{(K^J_p)}_p,\hat{\sigma}^{(0)}_p,\hat{\beta}^{(K^K_p)}_p$ such that, for $p=1,2,\dots,n$, $\bar{F}^{+}_p(u^R)\geq\bar{F}^{-}_p(u^R)>0$, $\bar{F}^{+}_p(u^I)\geq\bar{F}^{-}_p(u^I)>0$, $\bar{F}^{+}_p(u^J)\geq\bar{F}^{-}_p(u^J)>0$, $\bar{F}^{+}_p(u^K)\geq\bar{F}^{-}_p(u^K)>0$, and $\bar{F}^{-}_p(v^R)\leq\bar{F}^{+}_p(v^R)<0$, $\bar{F}^{-}_p(v^I)\leq\bar{F}^{+}_p(v^I)<0$, $\bar{F}^{-}_p(v^J)\leq\bar{F}^{+}_p(v^J)<0$, $\bar{F}^{-}_p(v^K)\leq\bar{F}^{+}_p(v^K)<0$, for all $u^R\leq\bar{\sigma}^{(0)}_p$, $u^I\leq\tilde{\sigma}^{(0)}_p$, $u^J\leq\check{\sigma}^{(0)}_p$, $u^K\leq\hat{\sigma}^{(0)}_p$, $v^R\geq\bar{\beta}^{(K^R_p)}_p$, $v^I\geq\tilde{\beta}^{(K^I_p)}_p$, $v^J\geq\check{\beta}^{(K^J_p)}_p$, $v^K\geq\hat{\beta}^{(K^K_p)}_p$.

    For any given interval $H\subset\mathbb{R}$, let $H^{0}=\emptyset$ and $H^{1}=H$. For convenience, we denote

    $$\begin{aligned}(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(0)}_p)&=(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(0)}_p)^{1}[\bar{\beta}^{(0)}_p,\bar{\sigma}^{(1)}_p]^{0}\cdots[\bar{\beta}^{(K^R_p-1)}_p,\bar{\sigma}^{(K^R_p)}_p]^{0}(\bar{\sigma}^{(K^R_p)}_p,\bar{\beta}^{(K^R_p)}_p)^{0},\\ [\bar{\beta}^{(r-1)}_p,\bar{\sigma}^{(r)}_p]&=(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(0)}_p)^{0}\cdots[\bar{\beta}^{(r-1)}_p,\bar{\sigma}^{(r)}_p]^{1}(\bar{\sigma}^{(r)}_p,\bar{\beta}^{(r)}_p)^{0}\cdots(\bar{\sigma}^{(K^R_p)}_p,\bar{\beta}^{(K^R_p)}_p)^{0},\\ (\bar{\sigma}^{(K^R_p)}_p,\bar{\beta}^{(K^R_p)}_p)&=(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(0)}_p)^{0}\cdots[\bar{\beta}^{(r-1)}_p,\bar{\sigma}^{(r)}_p]^{0}(\bar{\sigma}^{(r)}_p,\bar{\beta}^{(r)}_p)^{0}\cdots(\bar{\sigma}^{(K^R_p)}_p,\bar{\beta}^{(K^R_p)}_p)^{1},\end{aligned}$$

    with the analogous identities for the families $(\tilde{\sigma}^{(\cdot)}_p,\tilde{\beta}^{(\cdot)}_p)$, $(\check{\sigma}^{(\cdot)}_p,\check{\beta}^{(\cdot)}_p)$ and $(\hat{\sigma}^{(\cdot)}_p,\hat{\beta}^{(\cdot)}_p)$, where $K^I_p$, $K^J_p$ and $K^K_p$ replace $K^R_p$, respectively,

    for $r=1,2,\dots,K^{\nu}_p$, $p=1,2,\dots,n$ ($\nu=R,I,J,K$).

    Then the regions $\prod_{p=1}^{n}(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(K^R_p)}_p)$, $\prod_{p=1}^{n}(\tilde{\sigma}^{(0)}_p,\tilde{\beta}^{(K^I_p)}_p)$, $\prod_{p=1}^{n}(\check{\sigma}^{(0)}_p,\check{\beta}^{(K^J_p)}_p)$ and $\prod_{p=1}^{n}(\hat{\sigma}^{(0)}_p,\hat{\beta}^{(K^K_p)}_p)$ can be divided into $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$ and $\prod_{p=1}^{n}(2K^K_p+1)$ subsets, respectively. For any positive integer $N$, define the set $\Upsilon(N)=\{\Upsilon_r\mid r=1,\dots,N\}$ with $\Upsilon_r=(\xi_1,\dots,\xi_N)^{T}$ such that $\xi_r=1$ and $\xi_p=0$ when $p\neq r$, for $r,p=1,\dots,N$. Then take $\bar{\Upsilon}=\Upsilon(K^R_p+1)$, $\tilde{\Upsilon}=\Upsilon(K^I_p+1)$, $\check{\Upsilon}=\Upsilon(K^J_p+1)$, $\hat{\Upsilon}=\Upsilon(K^K_p+1)$. Therefore, $\mathbb{Q}^{n}$ can be divided into the following four regions:

    $$\bar{\Omega}=\Bigg\{\prod_{p=1}^{n}(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(0)}_p)^{\theta^{(p)}_{1}}\Bigg(\prod_{r=1}^{K^R_p}[\bar{\beta}^{(r-1)}_p,\bar{\sigma}^{(r)}_p]^{\theta^{(p)}_{2r}}(\bar{\sigma}^{(r)}_p,\bar{\beta}^{(r)}_p)^{\theta^{(p)}_{2r+1}}\Bigg),\ \theta=(\theta^{(p)}_{1},\dots,\theta^{(p)}_{2K^R_p+1})^{T}\in\bar{\Upsilon}\Bigg\},$$
    $$\tilde{\Omega}=\Bigg\{\prod_{p=1}^{n}(\tilde{\sigma}^{(0)}_p,\tilde{\beta}^{(0)}_p)^{\theta^{(p)}_{1}}\Bigg(\prod_{r=1}^{K^I_p}[\tilde{\beta}^{(r-1)}_p,\tilde{\sigma}^{(r)}_p]^{\theta^{(p)}_{2r}}(\tilde{\sigma}^{(r)}_p,\tilde{\beta}^{(r)}_p)^{\theta^{(p)}_{2r+1}}\Bigg),\ \theta=(\theta^{(p)}_{1},\dots,\theta^{(p)}_{2K^I_p+1})^{T}\in\tilde{\Upsilon}\Bigg\},$$
    $$\check{\Omega}=\Bigg\{\prod_{p=1}^{n}(\check{\sigma}^{(0)}_p,\check{\beta}^{(0)}_p)^{\theta^{(p)}_{1}}\Bigg(\prod_{r=1}^{K^J_p}[\check{\beta}^{(r-1)}_p,\check{\sigma}^{(r)}_p]^{\theta^{(p)}_{2r}}(\check{\sigma}^{(r)}_p,\check{\beta}^{(r)}_p)^{\theta^{(p)}_{2r+1}}\Bigg),\ \theta=(\theta^{(p)}_{1},\dots,\theta^{(p)}_{2K^J_p+1})^{T}\in\check{\Upsilon}\Bigg\},$$
    $$\hat{\Omega}=\Bigg\{\prod_{p=1}^{n}(\hat{\sigma}^{(0)}_p,\hat{\beta}^{(0)}_p)^{\theta^{(p)}_{1}}\Bigg(\prod_{r=1}^{K^K_p}[\hat{\beta}^{(r-1)}_p,\hat{\sigma}^{(r)}_p]^{\theta^{(p)}_{2r}}(\hat{\sigma}^{(r)}_p,\hat{\beta}^{(r)}_p)^{\theta^{(p)}_{2r+1}}\Bigg),\ \theta=(\theta^{(p)}_{1},\dots,\theta^{(p)}_{2K^K_p+1})^{T}\in\hat{\Upsilon}\Bigg\}.$$

    Therefore, it is not difficult to show that $\bar{\Omega}$ has $(2K^R_p+1)^{n}$ members, and it can be divided into two subsets: $\bar{\Xi}^{(\mathrm{one})}$ with $(K^R_p+1)^{n}$ members and $\bar{\Xi}^{(\mathrm{two})}$ with $(2K^R_p+1)^{n}-(K^R_p+1)^{n}$ members. $\tilde{\Xi}^{(\mathrm{one})},\check{\Xi}^{(\mathrm{one})},\hat{\Xi}^{(\mathrm{one})},\tilde{\Xi}^{(\mathrm{two})},\check{\Xi}^{(\mathrm{two})},\hat{\Xi}^{(\mathrm{two})}$ can be defined similarly.

    Remark 2.5. In order to obtain the stability results for the QVNN, it can be separated either into a $2n$-dimensional CVNN through its real and imaginary parts or into a $4n$-dimensional RVNN. Then the regions $\prod_{p=1}^{n}(\bar{\sigma}^{(0)}_p,\bar{\beta}^{(K^R_p)}_p)$, $\prod_{p=1}^{n}(\tilde{\sigma}^{(0)}_p,\tilde{\beta}^{(K^I_p)}_p)$, $\prod_{p=1}^{n}(\check{\sigma}^{(0)}_p,\check{\beta}^{(K^J_p)}_p)$, $\prod_{p=1}^{n}(\hat{\sigma}^{(0)}_p,\hat{\beta}^{(K^K_p)}_p)$ can be divided into $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ subsets, respectively, and each of these subsets contains at least one equilibrium point; that is, the neural network (2.3) has at least $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ equilibrium points.
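    To make the counting in Remark 2.5 concrete, the sketch below (our own illustration) builds the $2K_p+1$ alternating subintervals of Assumption 2.4 for each neuron from its $\sigma/\beta$ breakpoints and then forms their Cartesian product; the breakpoints used here are placeholders patterned after Example 4.1. The total number of product regions is $(2K_p+1)^n$, and the number of candidate stable regions, obtained by picking only open subintervals, is $(K_p+1)^n$.

        from itertools import product

        def subintervals(sigma, beta):
            """Breakpoints sigma[0] < beta[0] < sigma[1] < ... < beta[K] give the
            2K+1 alternating subintervals of Assumption 2.4: the open intervals
            (sigma[r], beta[r]) and the closed intervals [beta[r-1], sigma[r]]."""
            K = len(sigma) - 1
            parts = [("open", sigma[0], beta[0])]
            for r in range(1, K + 1):
                parts.append(("closed", beta[r - 1], sigma[r]))
                parts.append(("open", sigma[r], beta[r]))
            return parts

        # two neurons, K_p = 1 each (placeholder breakpoints)
        per_neuron = subintervals([float("-inf"), 2.0], [-2.0, float("inf")])
        regions = list(product(per_neuron, repeat=2))     # (2*1+1)^2 = 9 regions
        stable = [r for r in regions if all(p[0] == "open" for p in r)]
        print(len(regions), len(stable))                  # 9 and (1+1)^2 = 4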

    Lemma 2.6. [7] If

    $$\begin{cases}F^{R-}_p(\bar{\sigma}^{(r)}_p)>0, & F^{R+}_p(\bar{\beta}^{(r-1)}_p)<0,\\ F^{I-}_p(\tilde{\sigma}^{(r)}_p)>0, & F^{I+}_p(\tilde{\beta}^{(r-1)}_p)<0,\\ F^{J-}_p(\check{\sigma}^{(r)}_p)>0, & F^{J+}_p(\check{\beta}^{(r-1)}_p)<0,\\ F^{K-}_p(\hat{\sigma}^{(r)}_p)>0, & F^{K+}_p(\hat{\beta}^{(r-1)}_p)<0,\end{cases}\qquad(2.5)$$

    for $p=1,2,\dots,n$ and $r=1,\dots,K^{\nu}_p$, then each subset $\bar{\chi}_{\theta}\in\bar{\Omega}$, $\tilde{\chi}_{\theta}\in\tilde{\Omega}$, $\check{\chi}_{\theta}\in\check{\Omega}$, $\hat{\chi}_{\theta}\in\hat{\Omega}$ has at least one equilibrium point, and each $\bar{\chi}_{\bar{\theta}}\in\bar{\Xi}^{(\mathrm{one})}$, $\tilde{\chi}_{\bar{\theta}}\in\tilde{\Xi}^{(\mathrm{one})}$, $\check{\chi}_{\bar{\theta}}\in\check{\Xi}^{(\mathrm{one})}$, $\hat{\chi}_{\bar{\theta}}\in\hat{\Xi}^{(\mathrm{one})}$ is positively invariant.

    Assumption 2.7. The constants $d_p,a_{pq},b_{pq},\delta_q,\eta_q$ satisfy the following condition:

    $$\sum_{p=1}^{n}M(A)+\sum_{p=1}^{n}M(B)<D,$$

    where,

    $$D=\min\Big(\big|1-\max_{p}(d_p)\big|,\ \min_{p}(d_p)\Big),$$
    $$M(A)=\sum_{\mu\in\{R,I,J,K\}}\sum_{\ell\in\{R,I,J,K\}}\Big[\max_{q\in S_1}\big(|a^{\mu}_{pq}|\,\delta^{\ell\nu}_q\big)+\max_{q\in S_2}\big(|a^{\mu}_{pq}|\,\bar{\delta}^{\ell\nu}_q\big)\Big],$$
    $$M(B)=\sum_{\mu\in\{R,I,J,K\}}\sum_{\ell\in\{R,I,J,K\}}\Big[\max_{q\in S_1}\big(|b^{\mu}_{pq}|\,\eta^{\ell\nu}_q\big)+\max_{q\in S_2}\big(|b^{\mu}_{pq}|\,\bar{\eta}^{\ell\nu}_q\big)\Big],$$
    that is, $M(A)$ collects the sixteen $\max_{q\in S_1}(|a^{\mu}_{pq}|\delta^{\ell\nu}_q)$ terms and the sixteen $\max_{q\in S_2}(|a^{\mu}_{pq}|\bar{\delta}^{\ell\nu}_q)$ terms over $\mu,\ell\in\{R,I,J,K\}$, and analogously for $M(B)$ with $b^{\mu}_{pq}$, $\eta^{\ell\nu}_q$ and $\bar{\eta}^{\ell\nu}_q$, where $\nu=R,I,J,K$.

    Let $h^{*}_p$ denote an equilibrium point of (2.1). Then Eq (2.1) can be written as

    $$0=-d_ph^{*}_p+\sum_{q=1}^{n}a_{pq}f_q(h^{*}_q)+\sum_{q=1}^{n}b_{pq}g_q(h^{*}_q)+R_p. \qquad(3.1)$$

    Making the coordinate transformation $e_p(t)=h_p(t)-h^{*}_p$, from (3.1) and (2.1) we obtain

    $$\begin{aligned}D^{\upsilon}h_p(t)&=-d_ph_p(t)+\sum_{q=1}^{n}a_{pq}f_q(h_q(t))+\sum_{q=1}^{n}b_{pq}g_q(h_q(t-\tau))-\Big(-d_ph^{*}_p+\sum_{q=1}^{n}a_{pq}f_q(h^{*}_q)+\sum_{q=1}^{n}b_{pq}g_q(h^{*}_q)\Big),\\ D^{\upsilon}e_p(t)&=-d_pe_p(t)+\sum_{q=1}^{n}a_{pq}f^{*}_q+\sum_{q=1}^{n}b_{pq}g^{\tau}_q,\end{aligned}\qquad(3.2)$$

    where $f^{*}_q=f_q(h_q(t))-f_q(h^{*}_q)$ and $g^{\tau}_q=f_q(h_q(t-\tau))-f_q(h^{*}_q)$. The initial condition of system (3.2) is given by

    $$e(s)=\psi^{R}(s)+i\psi^{I}(s)+j\psi^{J}(s)+k\psi^{K}(s),\quad s\in[-\tau,0]. \qquad(3.3)$$

    Therefore, to find the stability results of (2.1), we can turn to study its equivalent system (3.2). We can write (3.2) as four real valued neural networks similar to (2.3). In order to facilitate the proof of the following theorem, we introduce some notations in Appendix A.

    Theorem 3.1. If Assumptions 2.4 and 2.7 and Lemma 2.6 hold for the neural network (2.3), then there exist at least $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ equilibrium points, and the $\prod_{p=1}^{n}(K^R_p+1)$, $\prod_{p=1}^{n}(K^I_p+1)$, $\prod_{p=1}^{n}(K^J_p+1)$, $\prod_{p=1}^{n}(K^K_p+1)$ equilibrium points located in the positively invariant sets $\bar{\Omega}_{\bar{\theta}}\in\bar{\Xi}^{(\mathrm{one})}$, $\tilde{\Omega}_{\bar{\theta}}\in\tilde{\Xi}^{(\mathrm{one})}$, $\check{\Omega}_{\bar{\theta}}\in\check{\Xi}^{(\mathrm{one})}$, $\hat{\Omega}_{\bar{\theta}}\in\hat{\Xi}^{(\mathrm{one})}$, respectively, are stable.

    Proof. Based on the properties of Caputo fractional derivatives, we can write the real part of (3.2) as

    $$\begin{aligned}D^{\upsilon}e^{R}_p&=-d_pe^{R}_p+\sum_{q=1}^{n}\big[a^{R}_{pq}f^{R}_q-a^{I}_{pq}f^{I}_q-a^{J}_{pq}f^{J}_q-a^{K}_{pq}f^{K}_q\big]+\sum_{q=1}^{n}\big[b^{R}_{pq}g^{R\tau}_q-b^{I}_{pq}g^{I\tau}_q-b^{J}_{pq}g^{J\tau}_q-b^{K}_{pq}g^{K\tau}_q\big],\\ e^{R}_p&=D^{-\upsilon}\Big[-d_pe^{R}_p+\sum_{q=1}^{n}\big[a^{R}_{pq}f^{R}_q-a^{I}_{pq}f^{I}_q-a^{J}_{pq}f^{J}_q-a^{K}_{pq}f^{K}_q\big]+\sum_{q=1}^{n}\big[b^{R}_{pq}g^{R\tau}_q-b^{I}_{pq}g^{I\tau}_q-b^{J}_{pq}g^{J\tau}_q-b^{K}_{pq}g^{K\tau}_q\big]\Big]\\ &=\frac{1}{\Gamma(\upsilon)}\int_{0}^{t}\frac{1}{(t-s)^{1-\upsilon}}\Big[-d_pe^{R}_p+\sum_{q=1}^{n}\big[a^{R}_{pq}f^{R}_q-a^{I}_{pq}f^{I}_q-a^{J}_{pq}f^{J}_q-a^{K}_{pq}f^{K}_q\big]+\sum_{q=1}^{n}\big[b^{R}_{pq}g^{R\tau}_q-b^{I}_{pq}g^{I\tau}_q-b^{J}_{pq}g^{J\tau}_q-b^{K}_{pq}g^{K\tau}_q\big]\Big]\,ds.\end{aligned}$$

    Multiplying both sides by $e^{t}$ and taking absolute values, we have

    et|eRp|=1Γ(υ)t0et(ts)1υ[dp|eRp|+nq=1|aRpq||fRq|+nq=1|aIpq||fIq|+nq=1|aJpq||fJq|nq=1|aKpq||fKq|+nq=1|bRpq||gRτq|+nq=1|bIpq||gIτq|+nq=1|bJpq||gJτq|+nq=1||bKpq||gKτq|]ds,dpt0I1|eRp|ds++nq=1|aRpq|t0I1[δRRq|eRq|+δRIq|eIq|+δRJq|eJq|+δRKq|eKq|]ds+nq=1|aIpq|t0I1[δIRq|eRq|+δIIq|eIq|+δIJq|eJq|+δIKq|eKq|]ds+nq=1|aJpq|×t0I1[δJRq|eRq|+δJIq|eIq|+δJJq|eJq|+δJKq|eKq|]ds+nq=1|aKpq|t0I1[δKRq|eRq|+δKIq|eIq|+δKJq|eJq|+δKKq|eKq|]ds+nq=1|bRpq|t0I2[ηRRq|eRτq|+ηRIq|eIτq|+ηRJq|eJτq|+ηRKq|eKτq|]ds+nq=1|bIpq|t0I2[ηIRq|eRτq|+ηIIq|eIτq|+ηIJq|eJτq|+ηIKq|eKτq|]ds+nq=1|bJpq|×t0I2[ηJRq|eRτq|+ηJIq|eIτq|+ηJJq|eJτq|+ηJKq|eKτq|]ds+nq=1||bKpq|t0I2[ηKRq|eRτq|+ηKIq|eIτq|+ηKJq|eJτq|+ηKKq|eKτq|]ds+nq=1|bRpq|t0I3[ηRRq|ψRτq|+ηRIq|ψIτq|+ηRJq|ψJτq|+ηRKq|ψKτq|]ds+nq=1|bIpq|t0I2[ηIRq|ψRτq|+ηIIq|ψIτq|+ηIJq|ψJτq|+ηIKq|ψKτq|]ds+nq=1|bJpq|×t0I2[ηJRq|ψRτq|+ηJIq|ψIτq|+ηJJq|ψJτq|+ηJKq|ψKτq|]ds+nq=1||bKpq|t0I2[ηKRq|ψRτq|+ηKIq|ψIτq|+ηKJq|ψJτq|+ηKKq|ψKτq|]ds,=dp|˜e(1)p|+4l=1[ˉA(l)p|˜e(l)q|+ˉB(l)p|ˉe(l)q|+ˉB(l)p|ˉψ(l)q|],=dp|˜e(1)p|+4l=1[ˉA(l)p|˜e(l)q|+ˉB(l)p|ˆe(l)q|+ˉB(l)p|ˆψ(l)q|],=maxp(dp)|s(ˇe(1)p)|+4l=1[ˉA(l)p|s(ˇe(l)q)|+ˉB(l)p|s(e(l)q)|+ˉB(l)p|s(ψ(l)q)|],||eR||nq=1supt(et|eRq|),[maxp(dp)+MR1]||eR||+MR2||eI||+MR3||eJ||+MR4||eK||+ˉMR1||ψR||+ˉMR2||ψI||+ˉMR3||ψJ||+ˉMR4||ψK||. (3.4)

    From (3.4), one obtains

    $$\|e^{R}\|\leq N^{R}_1\|e^{I}\|+N^{R}_2\|e^{J}\|+N^{R}_3\|e^{K}\|+\bar{N}^{R}_1\|\psi^{R}\|+\bar{N}^{R}_2\|\psi^{I}\|+\bar{N}^{R}_3\|\psi^{J}\|+\bar{N}^{R}_4\|\psi^{K}\|. \qquad(3.5)$$

    The estimates of $\|e^{I}\|$, $\|e^{J}\|$ and $\|e^{K}\|$ are obtained in the same way as that of $\|e^{R}\|$ and are omitted here to save space. As for $\|e^{R}\|$, we have

    $$\|e^{I}\|\leq N^{I}_1\|e^{R}\|+N^{I}_2\|e^{J}\|+N^{I}_3\|e^{K}\|+\bar{N}^{I}_1\|\psi^{R}\|+\bar{N}^{I}_2\|\psi^{I}\|+\bar{N}^{I}_3\|\psi^{J}\|+\bar{N}^{I}_4\|\psi^{K}\|, \qquad(3.6)$$
    $$\|e^{J}\|\leq N^{J}_1\|e^{R}\|+N^{J}_2\|e^{I}\|+N^{J}_3\|e^{K}\|+\bar{N}^{J}_1\|\psi^{R}\|+\bar{N}^{J}_2\|\psi^{I}\|+\bar{N}^{J}_3\|\psi^{J}\|+\bar{N}^{J}_4\|\psi^{K}\|, \qquad(3.7)$$
    $$\|e^{K}\|\leq N^{K}_1\|e^{R}\|+N^{K}_2\|e^{I}\|+N^{K}_3\|e^{J}\|+\bar{N}^{K}_1\|\psi^{R}\|+\bar{N}^{K}_2\|\psi^{I}\|+\bar{N}^{K}_3\|\psi^{J}\|+\bar{N}^{K}_4\|\psi^{K}\|. \qquad(3.8)$$

    From (3.5)–(3.8), we can easily obtain that

    $$\begin{cases}\|e^{R}\|\leq\Lambda^{R}_2\|\psi^{R}\|+\Lambda^{R}_3\|\psi^{I}\|+\Lambda^{R}_4\|\psi^{J}\|+\Lambda^{R}_5\|\psi^{K}\|,\\ \|e^{I}\|\leq\Lambda^{I}_2\|\psi^{R}\|+\Lambda^{I}_3\|\psi^{I}\|+\Lambda^{I}_4\|\psi^{J}\|+\Lambda^{I}_5\|\psi^{K}\|,\\ \|e^{J}\|\leq\Lambda^{J}_2\|\psi^{R}\|+\Lambda^{J}_3\|\psi^{I}\|+\Lambda^{J}_4\|\psi^{J}\|+\Lambda^{J}_5\|\psi^{K}\|,\\ \|e^{K}\|\leq\Lambda^{K}_2\|\psi^{R}\|+\Lambda^{K}_3\|\psi^{I}\|+\Lambda^{K}_4\|\psi^{J}\|+\Lambda^{K}_5\|\psi^{K}\|,\end{cases}\qquad(3.9)$$

    where,

    $$\Lambda^{R}_1=N^{R}_1T^{R}_1+N^{R}_2T^{R}_{11}+N^{R}_3T^{R}_9,\quad \Lambda^{I}_1=N^{I}_1T^{I}_{14}+N^{I}_2T^{I}_{12}+N^{I}_3T^{I}_{10},\quad \Lambda^{J}_1=N^{J}_1T^{J}_3+N^{J}_3T^{J}_{15}+N^{J}_2T^{J}_{17},\quad \Lambda^{K}_1=N^{K}_1T^{K}_9+N^{K}_2T^{K}_{10}+N^{K}_3T^{K}_7,$$
    $$\begin{aligned}\Lambda^{R}_2&=\frac{1}{1-\Lambda^{R}_1}\big[\bar{N}^{R}_1\bar{D}^{R}_1+\bar{N}^{I}_1\bar{D}^{R}_2+\bar{N}^{J}_1\bar{D}^{R}_3+\bar{N}^{K}_1\bar{D}^{R}_4\big],\qquad \Lambda^{R}_3=\frac{1}{1-\Lambda^{R}_1}\big[\bar{N}^{R}_2\bar{D}^{R}_1+\bar{N}^{I}_2\bar{D}^{R}_2+\bar{N}^{J}_2\bar{D}^{R}_3+\bar{N}^{K}_2\bar{D}^{R}_4\big],\\ \Lambda^{R}_4&=\frac{1}{1-\Lambda^{R}_1}\big[\bar{N}^{R}_3\bar{D}^{R}_1+\bar{N}^{I}_3\bar{D}^{R}_2+\bar{N}^{J}_3\bar{D}^{R}_3+\bar{N}^{K}_3\bar{D}^{R}_4\big],\qquad \Lambda^{R}_5=\frac{1}{1-\Lambda^{R}_1}\big[\bar{N}^{R}_4\bar{D}^{R}_1+\bar{N}^{I}_4\bar{D}^{R}_2+\bar{N}^{J}_4\bar{D}^{R}_3+\bar{N}^{K}_4\bar{D}^{R}_4\big],\end{aligned}$$

    $\Lambda^{I}_v,\Lambda^{J}_v,\Lambda^{K}_v$ ($v=2,3,4,5$) can be defined similarly, and $\bar{D}^{R}_l,\tilde{D}^{I}_l,\check{D}^{J}_l,\hat{D}^{K}_l$ ($l=1,2,3,4$) are given in Appendix B.

    If we choose $\|\psi^{R}\|\leq\frac{\epsilon_1}{4\Lambda^{R}_2}$, $\|\psi^{I}\|\leq\frac{\epsilon_1}{4\Lambda^{R}_3}$, $\|\psi^{J}\|\leq\frac{\epsilon_1}{4\Lambda^{R}_4}$, $\|\psi^{K}\|\leq\frac{\epsilon_1}{4\Lambda^{R}_5}$, then from (3.9), $\|e^{R}\|$ satisfies

    $$\|e^{R}\|\leq\epsilon_1. \qquad(3.10)$$

    Similarly, we can find from (3.9)

    $$\|e^{I}\|\leq\epsilon_2,\qquad \|e^{J}\|\leq\epsilon_3,\qquad \|e^{K}\|\leq\epsilon_4, \qquad(3.11)$$

    where,

    $$\|\psi^{R}\|\leq\frac{\epsilon_2}{4\Lambda^{I}_2},\quad\|\psi^{I}\|\leq\frac{\epsilon_2}{4\Lambda^{I}_3},\quad\|\psi^{J}\|\leq\frac{\epsilon_2}{4\Lambda^{I}_4},\quad\|\psi^{K}\|\leq\frac{\epsilon_2}{4\Lambda^{I}_5},$$
    $$\|\psi^{R}\|\leq\frac{\epsilon_3}{4\Lambda^{J}_2},\quad\|\psi^{I}\|\leq\frac{\epsilon_3}{4\Lambda^{J}_3},\quad\|\psi^{J}\|\leq\frac{\epsilon_3}{4\Lambda^{J}_4},\quad\|\psi^{K}\|\leq\frac{\epsilon_3}{4\Lambda^{J}_5},$$
    $$\|\psi^{R}\|\leq\frac{\epsilon_4}{4\Lambda^{K}_2},\quad\|\psi^{I}\|\leq\frac{\epsilon_4}{4\Lambda^{K}_3},\quad\|\psi^{J}\|\leq\frac{\epsilon_4}{4\Lambda^{K}_4},\quad\|\psi^{K}\|\leq\frac{\epsilon_4}{4\Lambda^{K}_5}.$$

    From (3.10) and (3.11), for every $\epsilon=\max\{\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4\}>0$ there exists $\omega=\epsilon/\max\{\omega_1,\omega_2,\omega_3,\omega_4\}>0$, with $\omega_1=\max\{\Lambda^{R}_2,\Lambda^{I}_2,\Lambda^{J}_2,\Lambda^{K}_2\}$, $\omega_2=\max\{\Lambda^{R}_3,\Lambda^{I}_3,\Lambda^{J}_3,\Lambda^{K}_3\}$, $\omega_3=\max\{\Lambda^{R}_4,\Lambda^{I}_4,\Lambda^{J}_4,\Lambda^{K}_4\}$, $\omega_4=\max\{\Lambda^{R}_5,\Lambda^{I}_5,\Lambda^{J}_5,\Lambda^{K}_5\}$, such that $\|e(t)\|<\epsilon$ whenever $\|\psi(t)\|<\omega$. Thus the equilibrium point located in $\bar{\chi}_{\bar{\theta}},\tilde{\chi}_{\bar{\theta}},\check{\chi}_{\bar{\theta}},\hat{\chi}_{\bar{\theta}}$ for the neural network (2.3) is stable, and its uniqueness in the invariant sets $\bar{\chi}_{\bar{\theta}},\tilde{\chi}_{\bar{\theta}},\check{\chi}_{\bar{\theta}},\hat{\chi}_{\bar{\theta}}$ can also be guaranteed. This completes the proof.

    Remark 3.2. In recent years, among the applications of fractional-order neural networks, multi-stability problems have become an increasingly interesting topic in the literature. Previously, the stability of fractional-order RVNNs or CVNNs attracted many researchers [50,51,52,53]. In [7], by using the geometrical properties of non-monotonic activation functions, the authors studied the multi-stability of recurrent neural networks; several sufficient conditions were derived to ensure the existence of $\prod_{i=1}^{n}(2K_i+1)$ equilibrium points ($K_i\geq 0$), of which $\prod_{i=1}^{n}(K_i+1)$ are stable. The multi-stability of classical integer-order QVNNs with time delay was considered in [32] by separating the proposed QVNNs into four RVNNs. In this paper, we derive several sufficient conditions for the existence of $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ equilibrium points, of which $\prod_{p=1}^{n}(K^R_p+1)$, $\prod_{p=1}^{n}(K^I_p+1)$, $\prod_{p=1}^{n}(K^J_p+1)$, $\prod_{p=1}^{n}(K^K_p+1)$ are stable and the others are unstable. It is obvious that the number of equilibrium points of QVNNs is much larger than that of CVNNs or RVNNs, and hence this is a generalized result.

    Remark 3.3. According to Assumption 2.4 and the geometrical properties of the activation functions, if $K^R_p=K^I_p=K^J_p=K^K_p=0$, $\bar{\sigma}^{(0)}_p=\tilde{\sigma}^{(0)}_p=\check{\sigma}^{(0)}_p=\hat{\sigma}^{(0)}_p=-\infty$ and $\bar{\beta}^{(K_p)}_p=\tilde{\beta}^{(K_p)}_p=\check{\beta}^{(K_p)}_p=\hat{\beta}^{(K_p)}_p=+\infty$, then the neural network (2.1) has only one equilibrium point, which is globally stable, similar to previous works.

    In this section, we present two examples to illustrate the theoretical results of this paper.

    Example 4.1. Consider the following two-dimensional fractional-order QVNNs:

    $$D^{\upsilon}h(t)=-Dh(t)+Af(t)+Bg(t-\tau)+R, \qquad(4.1)$$

    where the order is $\upsilon=0.98$, $h(t)=(h_1(t),h_2(t))^{T}$, $D=\mathrm{diag}(1,1)$, and the other parameters are $A=A^{R}+iA^{I}+jA^{J}+kA^{K}$, $B=B^{R}+iB^{I}+jB^{J}+kB^{K}$, $R=(0,0)^{T}$, where

    AR=(30.20.66),AI=(0.10.30.020.2),AJ=(0.30.51.70.8),AK=(0.61.50.70.2),BR=(0.0020.040.030.1),BI=(0.020.050.040.1),BJ=(0.20.30.060.06),BK=(0.010.010.0010.12),

    $f(t)=f^{R}+if^{I}+jf^{J}+kf^{K}$ and $g(t-\tau)=g^{R}+ig^{I}+jg^{J}+kg^{K}$, where $f^{R}=f^{I}=f^{J}=f^{K}=\tanh(t)$, $g^{R}=g^{I}=g^{J}=g^{K}=\tanh(t-\tau)$ and $\tau=0.6$. From Assumption 2.4, there exist $m^{R}_p=m^{I}_p=m^{J}_p=m^{K}_p=-0.1$ and $M^{R}_p=M^{I}_p=M^{J}_p=M^{K}_p=0.1$, $p=1,2$. Based on the given activation functions and Assumption 2.4, we choose $\bar{\sigma}^{(0)}_p=\tilde{\sigma}^{(0)}_p=\check{\sigma}^{(0)}_p=\hat{\sigma}^{(0)}_p=-\infty$, $\bar{\beta}^{(0)}_p=\tilde{\beta}^{(0)}_p=\check{\beta}^{(0)}_p=\hat{\beta}^{(0)}_p=-2$, $\bar{\sigma}^{(1)}_p=\tilde{\sigma}^{(1)}_p=\check{\sigma}^{(1)}_p=\hat{\sigma}^{(1)}_p=2$, $\bar{\beta}^{(1)}_p=\tilde{\beta}^{(1)}_p=\check{\beta}^{(1)}_p=\hat{\beta}^{(1)}_p=+\infty$ for $p=1,2$. It is easy to verify that condition (2.5) in Lemma 2.6 is satisfied for $K^{\nu}_1=K^{\nu}_2=1$. Hence, the conditions of Theorem 3.1 are satisfied with the above parameters, and the neural network (4.1) has $\prod_{p=1}^{2}(2K^R_p+1)$, $\prod_{p=1}^{2}(2K^I_p+1)$, $\prod_{p=1}^{2}(2K^J_p+1)$, $\prod_{p=1}^{2}(2K^K_p+1)$ equilibrium points, of which $\prod_{p=1}^{2}(K^R_p+1)$, $\prod_{p=1}^{2}(K^I_p+1)$, $\prod_{p=1}^{2}(K^J_p+1)$, $\prod_{p=1}^{2}(K^K_p+1)$ are stable. The simulation results are given in Figures 1-12 with 100 random initial conditions.

    Figure 1.  Trajectories of the state variable $h^{R}_1(t)$ in Example 1.
    Figure 2.  Trajectories of the state variable $h^{I}_1(t)$ in Example 1.
    Figure 3.  Trajectories of the state variable $h^{J}_1(t)$ in Example 1.
    Figure 4.  Trajectories of the state variable $h^{K}_1(t)$ in Example 1.
    Figure 5.  Trajectories of the state variable $h^{R}_2(t)$ in Example 1.
    Figure 6.  Trajectories of the state variable $h^{I}_2(t)$ in Example 1.
    Figure 7.  Trajectories of the state variable $h^{J}_2(t)$ in Example 1.
    Figure 8.  Trajectories of the state variable $h^{K}_2(t)$ in Example 1.
    Figure 9.  Phase graph of the state variables $h^{R}_1(t)$ and $h^{R}_2(t)$ in Example 1.
    Figure 10.  Phase graph of the state variables $h^{I}_1(t)$ and $h^{I}_2(t)$ in Example 1.
    Figure 11.  Phase graph of the state variables $h^{J}_1(t)$ and $h^{J}_2(t)$ in Example 1.
    Figure 12.  Phase graph of the state variables $h^{K}_1(t)$ and $h^{K}_2(t)$ in Example 1.
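    The trajectories shown for Example 4.1 can be reproduced qualitatively with a simple explicit scheme obtained from the equivalent Volterra integral form of (2.1) (a fractional rectangle rule). The sketch below is our own; the 8x8 block matrices A4 and B4 are placeholders, and to reproduce the figures they have to be replaced by the real block matrices assembled from $A^{R},\dots,B^{K}$ of Example 4.1 according to (2.3).

        import numpy as np
        from math import gamma

        def simulate_delayed_fde(nu, h, T, tau, rhs, history, dim):
            """Explicit fractional rectangle rule for D^nu x(t) = rhs(x(t), x(t - tau)):
            x_{n+1} = x_0 + h^nu/Gamma(nu+1) * sum_j [(n+1-j)^nu - (n-j)^nu] * rhs_j,
            with the delayed state read from the stored trajectory or the history."""
            steps = int(T / h)
            m = int(round(tau / h))                       # delay measured in steps
            x = np.zeros((steps + 1, dim))
            x[0] = history(0.0)
            F = np.zeros((steps + 1, dim))
            for n in range(steps):
                delayed = x[n - m] if n - m >= 0 else history((n - m) * h)
                F[n] = rhs(x[n], delayed)
                j = np.arange(n + 1)
                w = (n + 1 - j) ** nu - (n - j) ** nu     # rectangle-rule weights
                x[n + 1] = x[0] + (h ** nu / gamma(nu + 1)) * (w @ F[: n + 1])
            return x

        # placeholder 8x8 real block system standing in for (2.3) with n = 2
        rng = np.random.default_rng(1)
        A4 = 0.5 * rng.standard_normal((8, 8))            # replace with blocks of A^R,...,A^K
        B4 = 0.05 * rng.standard_normal((8, 8))           # replace with blocks of B^R,...,B^K
        D4, R4 = np.eye(8), np.zeros(8)
        x0 = rng.uniform(-3.0, 3.0, 8)                    # one random constant history
        traj = simulate_delayed_fde(
            nu=0.98, h=0.01, T=20.0, tau=0.6,
            rhs=lambda x, xd: -D4 @ x + A4 @ np.tanh(x) + B4 @ np.tanh(xd) + R4,
            history=lambda t: x0, dim=8)

    Repeating this from many random constant histories and plotting the components of traj against time gives pictures comparable to the trajectory and phase plots above.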

    Example 4.2. Consider the fractional-order (υ=0.9) QVNNs (4.1) with D=diag(1,1)

    AR=(40.30.76.2),BR=(0.460.040.30.1),AI=(0.350.620.20.86),BI=(0.20.050.040.1),AJ=(0.50.70.71.8),BJ=(0.30.30.060.1),AK=(1.81.41.92.2),BK=(0.150.10.30.12),

    the activation functions are $f^{R}_p=f^{I}_p=f^{J}_p=f^{K}_p=\exp(-u^{2})$ and $\tau=0.6$. Hence $m^{R}_p=m^{I}_p=m^{J}_p=m^{K}_p=-0.2$ and $M^{R}_p=M^{I}_p=M^{J}_p=M^{K}_p=0.2$ for $p=1,2$. From the activation functions, the chosen points are $\bar{\sigma}^{(0)}_p=\tilde{\sigma}^{(0)}_p=\check{\sigma}^{(0)}_p=\hat{\sigma}^{(0)}_p=-\infty$, $\bar{\beta}^{(0)}_p=\tilde{\beta}^{(0)}_p=\check{\beta}^{(0)}_p=\hat{\beta}^{(0)}_p=-1.5$, $\bar{\sigma}^{(1)}_p=\tilde{\sigma}^{(1)}_p=\check{\sigma}^{(1)}_p=\hat{\sigma}^{(1)}_p=1.5$, $\bar{\beta}^{(1)}_p=\tilde{\beta}^{(1)}_p=\check{\beta}^{(1)}_p=\hat{\beta}^{(1)}_p=+\infty$, $p=1,2$. It is easy to verify that condition (2.5) in Lemma 2.6 is satisfied for $K^{\nu}_1=K^{\nu}_2=1$. Hence, the conditions of Theorem 3.1 are satisfied with the above parameters, and so the considered neural network has $\prod_{p=1}^{2}(2K^R_p+1)$, $\prod_{p=1}^{2}(2K^I_p+1)$, $\prod_{p=1}^{2}(2K^J_p+1)$, $\prod_{p=1}^{2}(2K^K_p+1)$ equilibrium points, of which $\prod_{p=1}^{2}(K^R_p+1)$, $\prod_{p=1}^{2}(K^I_p+1)$, $\prod_{p=1}^{2}(K^J_p+1)$, $\prod_{p=1}^{2}(K^K_p+1)$ are stable. The simulation results are given in Figures 13-20 with 100 random initial conditions.

    Figure 13.  Trajectories of the state variable $h^{R}_1(t)$ in Example 2.
    Figure 14.  Trajectories of the state variable $h^{R}_2(t)$ in Example 2.
    Figure 15.  Trajectories of the state variable $h^{I}_1(t)$ in Example 2.
    Figure 16.  Trajectories of the state variable $h^{I}_2(t)$ in Example 2.
    Figure 17.  Trajectories of the state variable $h^{J}_1(t)$ in Example 2.
    Figure 18.  Trajectories of the state variable $h^{J}_2(t)$ in Example 2.
    Figure 19.  Trajectories of the state variable $h^{K}_1(t)$ in Example 2.
    Figure 20.  Trajectories of the state variable $h^{K}_2(t)$ in Example 2.

    For a given set of parameters, multi-stability refers to the existence of multiple stable final states. The initial conditions are critical in determining to which final state the system converges. In this paper, we have investigated the multi-stability of fractional-order QVNNs with delay. By employing the non-commutativity of quaternion multiplication, the QVNN can be converted into four RVNNs. According to the definition of the activation functions, the state space can be divided into $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ subsets. Some sufficient conditions are derived to ensure the existence and stability of multiple equilibrium points for the QVNNs. Under these conditions, QVNNs have $\prod_{p=1}^{n}(2K^R_p+1)$, $\prod_{p=1}^{n}(2K^I_p+1)$, $\prod_{p=1}^{n}(2K^J_p+1)$, $\prod_{p=1}^{n}(2K^K_p+1)$ equilibrium points, of which $\prod_{p=1}^{n}(K^R_p+1)$, $\prod_{p=1}^{n}(K^I_p+1)$, $\prod_{p=1}^{n}(K^J_p+1)$, $\prod_{p=1}^{n}(K^K_p+1)$ are stable while the other equilibrium points are unstable. Numerical simulation results have been presented to show the efficiency of our theoretical results. In comparison with previous works on real-valued and complex-valued neural networks, the extension to quaternion-valued neural networks is broader, and dealing with multi-stability in the fractional-order setting with time delays makes this work different from the existing literature.

    Future research can concentrate on the multi-stability analysis of fractional-order multiple quaternion-valued neural networks with time delay. Researchers can also develop further results on the stability and synchronization analysis of multiple quaternion-valued neural networks.

    The work of Ardak Kashkynbayev was supported by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan, grant OR11466188 "Dynamical Analysis and Synchronization of Complex Neural Networks and its Applications".

    All the authors declare no conflict of interest.

    In order to facilitate the proof of Theorem 3.1, we introduce the following notations, where $(l=1,\nu=R)$, $(l=2,\nu=I)$, $(l=3,\nu=J)$ and $(l=4,\nu=K)$.

    |s(ψ(l)q)|=nq=1supt(et|ψνq|)eτ,|s(e(l)q)|=nq=1supt(et|eνq|)eτ,|s(ˇe(1)p)|=np=1supt(et|eRp|),|s(ˇe(l)q)|=nq=1supt(et|eνq|),ˉA(l)p=maxqS1(|aRpq|δRνq)+maxqS1(|aIpq|δIνq)+maxqS1(|aJpq|δJνq)+maxqS1(|aKpq|δKνq)+maxqS2(|aRpq|ˉδRνq)+maxqS2(|aIpq|ˉδIνq)+maxqS2(|aJpq|ˉδJνq)+maxqS2(|aKpq|ˉδKνq),
    ˜A(l)p=maxqS1(|aRpq|δIνq)+maxqS1(|aIpq|δRνq)+maxqS1(|aJpq|δKνq)+maxqS1(|aKpq|δJνq)+maxqS2(|aRpq|ˉδIνq)+maxqS2(|aIpq|ˉδRνq)+maxqS2(|aJpq|ˉδKνq)+maxqS2(|aKpq|ˉδJνq),ˇA(l)p=maxqS1(|aRpq|δJνq)+maxqS1(|aIpq|δKνq)+maxqS1(|aJpq|δRνq)+maxqS1(|aKpq|δIνq)+maxqS2(|aRpq|ˉδJνq)+maxqS2(|aIpq|ˉδKνq)+maxqS2(|aJpq|ˉδRνq)+maxqS2(|aKpq|ˉδIνq),ˆA(l)p=maxqS1(|aRpq|δKνq)+maxqS1(|aIpq|δJνq)+maxqS1(|aJpq|δIνq)+maxqS1(|aKpq|δRνq)+maxqS2(|aRpq|ˉδKνq)+maxqS2(|aIpq|ˉδJνq)+maxqS2(|aJpq|ˉδIνq)+maxqS2(|aKpq|ˉδRνq),ˉB(l)p=maxqS1(|bRpq|ηRνq)+maxqS1(|bIpq|ηIνq)+maxqS1(|bJpq|ηJνq)+maxqS1(|bKpq|ηKνq)+maxqS2(|bRpq|ˉηRνq)+maxqS2(|bIpq|ˉηIνq)+maxqS2(|bJpq|ˉηJνq)+maxqS2(|bKpq|ˉηKνq),˜B(l)p=maxqS1(|bRpq|ηIνq)+maxqS1(|bIpq|ηRνq)+maxqS1(|bJpq|ηKνq)+maxqS1(|bKpq|ηJνq)+maxqS2(|bRpq|ˉηIνq)+maxqS2(|bIpq|ˉηRνq)+maxqS2(|bJpq|ˉηKνq)+maxqS2(|bKpq|ˉηJνq),ˇB(l)p=maxqS1(|bRpq|ηJνq)+maxqS1(|bIpq|ηKνq)+maxqS1(|bJpq|ηRνq)+maxqS1(|bKpq|ηIνq)+maxqS2(|bRpq|ˉηJνq)+maxqS2(|bIpq|ˉηKνq)+maxqS2(|bJpq|ˉηRνq)+maxqS2(|bKpq|ˉηIνq),ˆB(l)p=maxqS1(|bRpq|ηKνq)+maxqS1(|bIpq|ηJνq)+maxqS1(|bJpq|ηIνq)+maxqS1(|bKpq|ηRνq)+maxqS2(|bRpq|ˉηKνq)+maxqS2(|bIpq|ˉηJνq)+maxqS2(|bJpq|ˉηIνq)+maxqS2(|bKpq|ˉηRνq),I1=e(ts)es(ts)1υ,I2=e(ts+τ)e(sτ)(ts)1υ,|˜e(1)p|=supt(et|eRp|)1Γ(υ)t0u(υ1)eudu,|˜e(l)q|=nq=1supt(et|eνq|)1Γ(υ)t0u(υ1)eudu,|ˉψ(l)q|=1Γ(υ)0τ(tϱτ)(υ1)e(tϱ)eϱ|ψνq|du,|ˉe(l)q|=1Γ(υ)tτ0(tϱτ)(υ1)e(tϱ)eϱ|eνq|du,|ˆe(l)q|=|s(e(l)q)|1Γ(υ)ttτξ(υ1)eξdξ,|ˆψ(l)q|=|s(ψ(l)q)|1Γ(υ)ttτξ(υ1)eξdξ,MRl=np=1(ˉA(l)p+ˉB(l)p),ˉMRl=np=1ˉB(l)p,MIl=np=1(˜A(l)p+˜B(l)p),ˉMIl=np=1˜B(l)p,MJl=np=1(ˇA(l)p+ˇB(l)p),ˉMJl=np=1ˇB(l)p,MKl=np=1(ˆA(l)p+ˆB(l)p),ˉMKl=np=1ˆB(l)p,MR=1maxp(dp)+MR1,MI=1maxp(dp)+MI2,MJ=1maxp(dp)+MJ3,MK=1maxp(dp)+MK4,NR1=MR2MR,NR2=MR3MR,NR3=MR4MR,NI1=MI1MI,NI2=MI3MI,NI3=MI4MI,NJ1=MJ1MJ,NJ2=MJ2MJ,NJ3=MJ4MJ,NK1=MK1MK,NK2=MK2MK,NK3=MK3MK,ˉNνl=ˉMνlMν,TR1=NJ1+NJ3NK11NJ3NK3,TR2=NJ2+NJ3NK21NJ3NK3,TR3=NJ3NK+NJ1NJ3NK3,TR4=(NK1+NK3T1),TR5=NK2+NK3T2,TR6=NK3T3+NK,TR7=NI1+NI2T1+NI3T41NI2T2+NI3T5,TR8=NI2T3+NI3T6+NI1NI2T2+NI3T5,TR9=T4+T5T7,TR10=T5T8+T6,TR11=T1+T2T7,TR12=T2T8+T3,TI1=NR1+NR2NJ21NR2NJ1,TI2=NR2NJ31NR2NJ1,TI3=NJNR2+NR1NR2NJ1,TI4=NJ1TI1+NJ2,TI5=NJ1TI2+NJ3,TI6=NJ1TI3+NJ,TI7=NK1TI1+NK2+NK3TI4,TI8=NK1TI2+NK3TI5,TI9=NK1TI3+NK3TI6+NK,TI10=TI11TI8,TI11=TI91TI8,TI12=TI4+TI5TI10,TI13=TI5TI11+TI6,TI14=TI1+TI2TI10,TI15=TI2TI11+TI3,TJ1=NI2+NI1NR21NI1NR1,TJ2=NI1NR3+NI31NI1NR1,TJ3=NI1NR+NI1NI1NR1,TJ4=NK11NK2TJ2,TJ5=NK2TJ1+NK31NK2TJ2,TJ6=TJ3NK2+NK1NK2TJ2,TJ7=TJ2TJ4,TJ8=TJ1+TJ2TJ5,TJ9=TJ2TJ6+TJ3,
    TJ10=NR1TJ7+NR3TJ4,TJ11=NR1TJ8+NR3TJ5+NR2,TJ12=NR1TJ9+NR3TJ6,TJ13=TJ111TJ10,TJ14=TJ121TJ10,TJ15=TJ4TJ13+TJ5,TJ16=TJ4TJ14+TJ6,TJ17=TJ1+TJ2TJ15,TJ18=TJ2TJ16+TJ3,TK1=NR1NI2+NR21NR1NI1,TK2=NR3+NR1NI31NR1NI1,TK3=NR1NI+NR1NR1NI1,TK4=NI1TK1+NI1,TK5=NI1TK2+NI3,TK6=NI1TK3+NI,TK7=NJ1TK2+NJ2TK5+NJ31NJ1TK1+NJ2TK4,TK8=NJ1TK3+NJ2TK6+NJ1NJ1TK1+NJ2TK4.
    ˉDR1=1+NK2NI3NI2NJ2+NK3NI3NJ22NK3NJ3NK2NI2NJ3NK2NK3NI3NJ3+NK3NI2NJ2NJ3NK23NI3NJ2NJ3+NK23NJ23+NK2NK3NI2NJ23,˜DI1=NI1+NK1NI3+NI2NJ1+NK3NI3NJ1NK3NI1NJ3+NK1NI2NJ3,ˇDJ1=NR1NI1NJ1NK2NR3NI1NJ1+NK1NR1NR3NI1NJ1+NK1NK2NR23NI1NJ1+NR21NI21NJ1+2NK2NR1NR3NI21NJ1+NK22NR23NI21NJ1+NK2NR1NI1NI3NJ1+NK1NR21NI1NI3NJ1+NK22NR3NI1NI3NJ1+NK1NK2NR1NR3NI1NI3NJ1NI1NJ2+NK1NR3NI1NJ2+NR1NI21NJ2+NK2NR3NI21NJ2NK1NR1NR3NI21NJ2NK1NK2NR23NI21NJ2+NK2NI1NI3NJ2NK1NK2NR3NI1NI3NJ2NK2NI1NJ3NK1NR1NI1NJ3+NK2NR1NI21NJ3+NK1NR21NI21NJ3+NK22NR3NI21NJ3+NK1NK2NR1NR3NI21NJ3+NK22NI1NI3NJ3+NK1NK2NR1NI1NI3NJ3,ˆDK1=1+NI1NR1NI1NR1NI21+NK3NJ1NR2NJ1+NK1NR2NJ1NK3NR1NI1NJ1NR2NI1NJ1+NK2NR2NI1NJ1+NK2NI2NJ1NR1NI2NJ1+NK1NR1NI2NJ1NR1NI1NI2NJ1+NK3NI1NJ2+NR2NI1NJ2+NK1NR2NI1NJ2NK3NR1NI21NJ2+NR2NI21NJ2+NK2NR2NI21NJ2+NI2NJ2+NI1NI2NJ2+NK2NI1NI2NJ2+NK1NR1NI1NI2NJ2,ˉDR2=NR1+NK2NR3+NR2NJ2+NK3NR3NJ22NK3NR1NJ3+NK2NR2NJ3NK2NK3NR3NJ3NK3NR2NJ2NJ3NK23NR3NJ2NJ3+NK23NR1NJ23NK2NK3NR2NJ23,˜DI2=1NR2NJ1NK3NJ3NK1NR2NJ3,
    ˇDJ2=NR1NJ1NK2NR3NJ1+NK1NR1NR3NJ1+NK1NK2NR23NJ1+NR21NI1NJ1+2NK2NR1NR3NI1NJ1+NK22NR23NI1NJ1+NK2NR1NI3NJ1+NK1NR21NI3NJ1+NK22NR3NI3NJ1+NK1NK2NR1NR3NI3NJ1NJ2+NK1NR3NJ2+NR1NI1NJ2+NK2NR3NI1NJ2NK1NR1NR3NI1NJ2NK1NK2NR23NI1NJ2+NK2NI3NJ2NK1NK2NR3NI3NJ2NK2NJ3NK1NR1NJ3+NK2NR1NI1NJ3+NK1NR21NI1NJ3+NK22NR3NI1NJ3+NK1NK2NR1NR3NI1NJ3+NK22NI3NJ3+NK1NK2NR1NI3NJ3,ˆDK2=1+NR1NR21NI1NR21NI21+NK3NR1NJ1NR2NJ1NR1NR2NJ1+NK1NR1NR2NJ1NK3NR21NI1NJ1NR1NR2NI1NJ1+NK2NR1NR2NI1NJ1NR1NI2NJ1+NK2NR1NI2NJ1NR21NI2NJ1+NK1NR21NI2NJ1NR21NI1NI2NJ1+NK3NJ2+NK1NR2NJ2+NR2NI1NJ2+NK2NR2NI1NJ2+NR1NR2NI1NJ2+NK1NR1NR2NI1NJ2NK3NR21NI21NJ2+NR1NR2NI21NJ2+NK2NR1NR2NI21NJ2+NI2NJ2+NK2NI2NJ2+NR1NI2NJ2+NK1NR1NI2NJ2+NR1NI1NI2NJ2+NK2NR1NI1NI2NJ2+NK1NR21NI1NI2NJ2,ˉDR3=NR2+NK3NR3+NR1NI2+NK2NR3NI2+NK3NR1NI3+NK2NR2NI3+2NK2NK3NR3NI3+2NK3NR2NI3NJ2+2NK23NR3NI3NJ2NK3NR2NJ3NK23NR3NJ3NK3NR1NI2NJ3NK2NK3NR3NI2NJ3NK23NR1NI3NJ3+NK2NK3NR2NI3NJ3,˜DI3=NR2NI1+NI2+NK3NI3+NK1NR2NI3,ˇDJ3=1+NK1NR3+2NR1NI1+2NK2NR3NI1NK1NR1NR3NI1NK1NK2NR23NI1NR21NI212NK2NR1NR3NI21NK22NR23NI21+2NK2NI3+NK1NR1NI3NK1NK2NR3NI32NK2NR1NI1NI3NK1NR21NI1NI32NK22NR3NI1NI3NK1NK2NR1NR3NI1NI3NK22NI23NK1NK2NR1NI23,ˆDK3=NK3+NK1NR2NK3NR1NI1+NK2NR2NI1+NK2NI2+NK1NR1NI2,ˉDR4=NR3+NR1NI3+2NK2NR3NI3NR3NI2NJ2+NR2NI3NJ2+2NK3NR3NI3NJ2+NR2NJ3NK3NR3NJ3+NR1NI2NJ3NK3NR1NI3NJ3+2NK2NR2NI3NJ3+NK3NR3NI2NJ2NJ3+NK3NR2NI3NJ2NJ3NK3NR2NJ23NK3NR1NI2NJ23,˜DI4=NI3NR2NI3NJ1+NR2NI1NJ3+NI2NJ3,ˇDJ4=NR3NJ1+NK1NR23NJ1+NR1NR3NI1NJ1+NK2NR23NI1NJ1NR1NI3NJ1+NK2NR3NI3NJ1+2NK1NR1NR3NI3NJ1+NR21NI1NI3NJ1+NK2NR1NR3NI1NI3NJ1+NK2NR1NI23NJ1+NK1NR21NI23NJ1NR3NI1NJ2+NR1NR3NI21NJ2+NK2NR23NI21NJ2NI3NJ2+NR1NI1NI3NJ2+2NK2NR3NI1NI3NJ2+NK2NI23NJ2NJ3+2NR1NI1NJ3+NK2NR3NI1NJ3NR21NI21NJ3NK2NR1NR3NI21NJ3+NK2NI3NJ3NK2NR1NI1NI3NJ3,ˆDK4=1NR1NI1NR2NJ1NR1NI2NJ1+NR2NI1NJ2+NI2NJ2.


    [1] H. Wang, Y. Yu, G. Wen, S. Zhang, J. Yu, Global stability analysis of fractional-order Hopfield neural networks with time delay, Neurocomputing, 154 (2015), 15–23. doi: 10.1016/j.neucom.2014.12.031. doi: 10.1016/j.neucom.2014.12.031
    [2] R. Rakkiyappan, J. Cao, G. Velmurugan, Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays, IEEE Trans. Neural Netw. Learn. Syst., 26 (2015), 84–97. doi: 10.1109/TNNLS.2014.2311099. doi: 10.1109/TNNLS.2014.2311099
    [3] R. Rakkiyappan, K. Udhayakumar, G. Velmurugan, J. Cao, A. Alsaedi, Stability and Hopf bifurcation analysis of fractional-order complex-valued neural networks with time delays, Adv. Differ. Equ., 1 (2017), 1–25. doi: 10.1186/s13662-017-1266-3. doi: 10.1186/s13662-017-1266-3
    [4] G. Velmurugan, R. Rakkiyappan, J. Cao, Finite-time synchronization of fractional-order memristor-based neural networks with time delays, Neural Netw., 73 (2016), 36–46. doi: 10.1016/j.neunet.2015.09.012. doi: 10.1016/j.neunet.2015.09.012
    [5] A. Abdurahman, H. Jiang, Z. Teng, Finite-time synchronization for memristor-based neural networks with time-varying delays, Neural Netw., 69 (2015), 20–28. doi: 10.1016/j.neunet.2015.04.015. doi: 10.1016/j.neunet.2015.04.015
    [6] J. Cao, M. Xiao, Stability and Hopf bifurcation in a simplified BAM neural network with two time delays, IEEE Trans. Neural Netw. Learn. Syst., 18 (2007), 416–430. doi: 10.1109/TNN.2006.886358. doi: 10.1109/TNN.2006.886358
    [7] P. Liu, Z. Zeng, J. Wang, Multiple mittag–leffler stability of fractional-order recurrent neural networks, IEEE Trans. Syst. Man Cybern. Syst., 47 (2017), 2279–2288. doi: 10.1109/TSMC.2017.2651059. doi: 10.1109/TSMC.2017.2651059
    [8] A. Wu, Z. Zeng, Global Mittag-Leffler stabilization of fractional-order memristive neural networks, IEEE Trans. Neural Netw. Learn. Syst., 28 (2015), 206–217. doi: 10.1109/TNNLS.2015.2506738. doi: 10.1109/TNNLS.2015.2506738
    [9] X. Yang, C. Li, Q. Song, J. Chen, J. Huang, Global Mittag-Leffler stability and synchronization analysis of fractional-order quaternion-valued neural networks with linear threshold neurons, Neural Netw., 105 (2018), 88–103. doi: 10.1016/j.neunet.2018.04.015. doi: 10.1016/j.neunet.2018.04.015
    [10] Z. Sabir, M. A. Raja, M. Shoaib, J. G. Aguilar, FMNEICS: Fractional Meyer neuro-evolution-based intelligent computing solver for doubly singular multi-fractional order Lane–Emden system, Comput. Appl. Math., 39 (2020), 303. doi: 10.1007/s40314-020-01350-0. doi: 10.1007/s40314-020-01350-0
    [11] Z. Sabir, M. A. Raja, J. L. Guirao, M. Shoaib, A novel design of fractional Meyer wavelet neural networks with application to the nonlinear singular fractional Lane-Emden systems, Alex. Eng. J., 60 (2021), 2641–2659. doi: 10.1016/j.aej.2021.01.004. doi: 10.1016/j.aej.2021.01.004
    [12] Z. Sabir, M. A. Raja, J. L. Guirao, T. Saeed, Meyer wavelet neural networks to solve a novel design of fractional order pantograph Lane-Emden differential model, Chaos Soliton. Fract., 152 (2021), 111404. doi: 10.1016/j.chaos.2021.111404. doi: 10.1016/j.chaos.2021.111404
    [13] Z. Sabir, M. A. Raja, J. L. Guirao, T. Saeed, Solution of novel multi-fractional multi-singular Lane-Emden model using the designed FMNEICS, Neural Comput. Appl., 33 (2021), 17287–17302. doi: 10.1007/s00521-021-06318-7. doi: 10.1007/s00521-021-06318-7
    [14] Z. Sabir, M. A. Raja, D. Baleanu, Fractional Mayer Neuro-swarm heuristic solver for multi-fractional Order doubly singular model based on Lane-Emden equation, Fractals, 29 (2021), 2140017. doi: 10.1142/S0218348X2140017X. doi: 10.1142/S0218348X2140017X
    [15] A. Elsonbaty, Z. Sabir, R. Ramaswamy, W. Adel, Dynamical Analysis of a novel discrete fractional SITRS model for COVID-19, Fractals, 29 (2021), 2140035. doi: 10.1142/S0218348X21400351. doi: 10.1142/S0218348X21400351
    [16] E. Kaslik, I. R. Rǎdulescu, Dynamics of complex-valued fractional-order neural networks, Neural Netw., 89 (2017), 39–49. doi: 10.1016/j.neunet.2017.02.011. doi: 10.1016/j.neunet.2017.02.011
    [17] I. Podlubny, Fractional differential equations, California: Academic Press, 1999.
    [18] A. Aghili, Complete solution for the time fractional diffusion problem with mixed boundary conditions by operational method, Appl. Math. Nonlinear Sci., 6 (2020), 9–20. doi: 10.2478/AMNS.2020.2.00002. doi: 10.2478/AMNS.2020.2.00002
    [19] K. A. Touchent, Z. Hammouch, T. Mekkaoui, A modified invariant subspace method for solving partial differential equations with non-singular kernel fractional derivatives, Appl. Math. Nonlinear Sci., 5 (2020), 35–48. doi: 10.2478/amns.2020.2.00012. doi: 10.2478/amns.2020.2.00012
    [20] M. Onal, A. Esen, A Crank-Nicolson approximation for the time fractional Burgers equation, Appl. Math. Nonlinear Sci., 5 (2020), 177–184. doi: 10.2478/amns.2020.2.00023. doi: 10.2478/amns.2020.2.00023
    [21] A. O. Akdemir, E. Deniz, E. Yüksel, On Some integral inequalities via conformable fractional integrals, Appl. Math. Nonlinear Sci., 6 (2021), 489–498. doi: 10.2478/amns.2020.2.00071. doi: 10.2478/amns.2020.2.00071
    [22] M. Gürbüz, C. Yildiz, Some new inequalities for convex functions via Riemann-Liouville fractional integrals, Appl. Math. Nonlinear Sci., 6 (2021) 537–544. doi: 10.2478/amns.2020.2.00015.
    [23] S. Arik, Global asymptotic stability of a class of dynamical neural networks, IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., 47 (2000), 568–571. doi: 10.1109/81.841858. doi: 10.1109/81.841858
    [24] H. Zhang, E. Fata, S. Sundaram, A notion of robustness in complex networks, IEEE T. Control Netw., 2 (2015), 310–320. doi: 10.1109/TCNS.2015.2413551. doi: 10.1109/TCNS.2015.2413551
    [25] Y. Huang, H. Zhang, Z. Wang, Multi-stability and multi-periodicity of delayed biderectional associative memory neural networks with discontinuous activation functions, Appl. Math. Comput., 219 (2012), 899–910. doi: 10.1016/j.amc.2012.06.068. doi: 10.1016/j.amc.2012.06.068
    [26] R. Rakkiyappan, G. Velmurugan, J. Cao, Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with time delays, Nonlinear Dynam., 78 (2014), 2823–2836. doi: 10.1007/s11071-014-1628-2. doi: 10.1007/s11071-014-1628-2
    [27] G. Velmurugan, R. Rakkiyappan, V. Vembarasan, J. Cao, A. Alsaedi, Dissipativity and stability analysis of fractional-order complex-valued neural networks with time delay, Neural Netw., 86 (2017), 42–53. doi: 10.1016/j.neunet.2016.10.010. doi: 10.1016/j.neunet.2016.10.010
    [28] X. Chen, Q. Song, Z. Li, Z. Zhao, Y. Liu, Stability analysis of continuous-time and discrete-time quaternion-valued neural networks with linear threshold neurons, IEEE Trans. Neural Netw. Learn. Syst., 29 (2018), 2769–2781. doi: 10.1109/TNNLS.2017.2704286. doi: 10.1109/TNNLS.2017.2704286
    [29] J. Hu, C. Zeng, J. Tan, Boundedness and periodicity for linear threshold discrete-time quaternion-valued neural network with time-delays, Neurocomputing, 267 (2017), 417–425. doi: 10.1016/j.neucom.2017.06.047. doi: 10.1016/j.neucom.2017.06.047
    [30] N. Li, W. X. Zheng, Passivity analysis for quaternion-valued memristor-based neural networks with time-varying delay, IEEE Trans. Neural Netw. Learn. Syst., 31 (2020), 639–650. doi: 10.1109/TNNLS.2019.2908755. doi: 10.1109/TNNLS.2019.2908755
    [31] Y. Liu, D. Zhang, J. Lu, Global exponential stability for quaternion-valued recurrent neural networks with time-varying delay, Nonlinear Dynam., 87 (2017), 553–565. doi: 10.1007/s11071-016-3060-2. doi: 10.1007/s11071-016-3060-2
    [32] Q. Song, X. Chen, Multistability analysis of quaternion-valued neural networks with time delay, IEEE Trans. Neural Netw. Learn. Syst., 29 (2017), 5430–5440. doi: 10.1109/TNNLS.2018.2801297. doi: 10.1109/TNNLS.2018.2801297
    [33] Y. Liu, D. Zhang, J. Lou, J. Lu, J. Cao, Stability analysis of quaternion-valued neural networks: Decomposition and direct approaches, IEEE Trans. Neural Netw. Learn. Syst., 29 (2018), 4201–4211. doi: 10.1109/TNNLS.2017.2755697. doi: 10.1109/TNNLS.2017.2755697
    [34] Y. Liu, D. Zhang, J. Lu, J. Cao, Global μ-stability criteria for quaternion-valued neural networks with unbounded time-varying delays, Inform. Sciences, 360 (2016), 273–288. doi: 10.1016/j.ins.2016.04.033. doi: 10.1016/j.ins.2016.04.033
    [35] Z. Tu, J. Cao, A. Alsaedi, T. Hayat, Global dissipativity analysis for delayed quaternion-valued neural networks, Neural Netw., 89 (2017), 97–104. doi: 10.1016/j.neunet.2017.01.006. doi: 10.1016/j.neunet.2017.01.006
    [36] S. Zhang, Y. Yu, J. Yu, LMI conditions for global stability of fractional-order neural networks, IEEE Trans. Neural Netw. Learn. Syst., 28 (2017), 2423–2433. doi: 10.1109/TNNLS.2016.2574842. doi: 10.1109/TNNLS.2016.2574842
    [37] M. Ogura, V. M. Preciado, Stability of SIS Spreading Processes in Networks with Non-Markovian Transmission and Recovery, IEEE T. Control Netw., 7 (2020), 349–359. doi:10.1109/TCNS.2019.2905131. doi: 10.1109/TCNS.2019.2905131
    [38] F. Li, Stability of Boolean networks with delays using pinning control, IEEE T. Control Netw., 5 (2018), 179–185. doi: 10.1109/TCNS.2016.2585861. doi: 10.1109/TCNS.2016.2585861
    [39] F. Zhang, Z. Zeng, Multiple ψ-type stability of Cohen-Grossberg neural networks with unbounded time-varying delays, IEEE Trans. Syst. Man Cybern. Syst., 51 (2018), 521–531. doi: 10.1109/TSMC.2018.2876003. doi: 10.1109/TSMC.2018.2876003
    [40] F. Zhang, Z. Zeng, Multistability and instability analysis of recurrent neural networks with time-varying delays, Neural Netw., 97 (2018), 116–126. doi: 10.1016/j.neunet.2017.09.013. doi: 10.1016/j.neunet.2017.09.013
    [41] I. Karafyllis, M. Papageorgiou, Global exponential stability for discrete-time networks with applications to traffic networks, IEEE T. Control Netw., 2 (2015), 68–77. doi: 10.1109/TCNS.2014.2367364. doi: 10.1109/TCNS.2014.2367364
    [42] Z. Zeng, T. Huang, W. X. Zheng, Multistability of recurrent neural networks with time-varying delays and the piecewise linear activation function, IEEE Trans. Neural Netw. Learn. Syst., 21 (2010), 1371–1377. doi: 10.1109/TNN.2010.2054106. doi: 10.1109/TNN.2010.2054106
    [43] H. L. Li, L. Zhang, C. Hu, Y. L. Jiang, Z. Teng, Dynamical analysis of a fractional-order predator-prey model incorporating a prey refuge, J. Appl. Math. Comput., 54 (2017), 435–449. doi: 10.1007/s12190-016-1017-8. doi: 10.1007/s12190-016-1017-8
    [44] R. Wei, J. Cao, Fixed-time synchronization of quaternion-valued memristive neural networks with time delays, Neural Netw., 113 (2019), 1–10. doi: 10.1016/j.neunet.2019.01.014. doi: 10.1016/j.neunet.2019.01.014
    [45] H. L. Li, C. Hu, L. Zhang, H. Jiang, J. Cao, Non-separation method-based robust finite-time synchronization of uncertain fractional-order quaternion-valued neural networks, Appl. Math. Comput., 409 (2021), 126377. doi: 10.1016/j.amc.2021.126377. doi: 10.1016/j.amc.2021.126377
    [46] H. L. Li, H. Jiang, J. Cao, Global synchronization of fractional-order quaternion-valued neural networks with leakage and discrete delays, Neurocomputing, 385 (2020), 211–219. doi: 10.1016/j.neucom.2019.12.018. doi: 10.1016/j.neucom.2019.12.018
    [47] P. Liu, Z. Zeng, J. Wang, Multistability of recurrent neural networks with non-monotonic activation functions and unbounded time-varying delays, IEEE Trans. Neural Netw. Learn. Syst., 29 (2018), 3000–3010. doi: 10.1109/TSMC.2015.2461191. doi: 10.1109/TSMC.2015.2461191
    [48] Z. Zeng, W. X. Zheng. Multistability of neural networks with time-varying delays and concave-convex characteristics, IEEE Trans. Neural Netw. Learn. Syst., 23 (2012), 293–305. doi: 10.1109/TNNLS.2011.2179311. doi: 10.1109/TNNLS.2011.2179311
    [49] A. Kilbas, H. Srivastava, J. Trujillo, Theory and applications of fractional differential equations, The Netherlands: Elsevier, 2006.
    [50] I. Stamova, Global mittag-Leffler stability and synchronization of impulsive fractional-order neural networks with time-varying delays, Nonlinear Dynam., 77 (2014), 1251–1260. doi: 10.1007/s11071-014-1375-4. doi: 10.1007/s11071-014-1375-4
    [51] S. Tyagi, S. Abbas, M. Hafayed, Global Mittag-Leffler stability of complex valued fractional-order neural network with discrete and distributed delays, Rend. Circ. Mat. Palerm., 65 (2016), 485–505. doi: 10.1007/s12215-016-0248-8. doi: 10.1007/s12215-016-0248-8
    [52] J. Zhang, Y. Xiang, N. Wang, Multiple Mittag-Leffler stability of fractional-order Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions, In: 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), 2018,520–527. doi: 10.1109/ICACI.2018.8377513.
    [53] S. Zhang, Y. Yu, H. Wang, Mittag-Leffler stability of fractional-order Hopfield neural networks, Nonlinear Anal-Hybri., 16 (2015), 104–121. doi: 10.1016/j.nahs.2014.10.001. doi: 10.1016/j.nahs.2014.10.001
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)