Research article

Multimode function multistability of Cohen-Grossberg neural networks with Gaussian activation functions and mixed time delays

  • This paper explores multimode function multistability of Cohen-Grossberg neural networks (CGNNs) with Gaussian activation functions and mixed time delays. Starting from the geometrical properties of Gaussian functions, the state space is partitioned into $3^{\mu}$ subspaces, where $0\le\mu\le n$. Moreover, through the utilization of Brouwer's fixed point theorem and a contraction mapping argument, some sufficient conditions are acquired to ensure the existence of exactly $3^{\mu}$ equilibria for $n$-dimensional CGNNs. Among them, $2^{\mu}$ equilibrium points are multimode function stable, while the remaining $3^{\mu}-2^{\mu}$ are unstable. Ultimately, two illustrative examples are provided to confirm the efficacy of the theoretical results.

    Citation: Jiang-Wei Ke, Jin-E Zhang, Ji-Xiang Zhang. Multimode function multistability of Cohen-Grossberg neural networks with Gaussian activation functions and mixed time delays[J]. AIMS Mathematics, 2024, 9(2): 4562-4586. doi: 10.3934/math.2024220




    Essentially, Cohen-Grossberg neural networks (CGNNs) are a class of artificial feedback neural networks, which means they share common characteristics with other artificial neural networks in terms of information transfer and feedback mechanisms. CGNNs are highly adaptable neural network models (see, e.g., [1,2,3]) that include Hopfield neural networks and cellular neural networks as special cases, so the dynamic properties of numerous neural networks can be considered simultaneously when studying CGNNs. In addition, CGNNs offer extensive application prospects across various fields, including pattern recognition, classification, and associative memory (see, e.g., [4,5,6]). Stability is a prerequisite for the effectiveness of CGNNs in these applications. Thus, in order to obtain a larger storage capacity, CGNNs are designed with multiple stable equilibrium points, which has attracted many researchers to explore the multistability of CGNNs.

    Actually, multistability analysis is typically more challenging than mono-stability analysis, since the phase space needs to be effectively partitioned into subsets containing equilibrium points according to the type of activation function. By dividing the state space, the dynamics of the equilibrium points in each subset can be studied separately. Naturally, there are valuable works addressing this issue (see, e.g., [7,8,9,10,11]). In [9], Liu et al. investigated multistability in fractional-order recurrent neural networks by exploiting the activation function and a nonsingular M-matrix, and they concluded that there exist $\prod_{i=1}^{n}(2K_i+1)$ equilibria, among which $\prod_{i=1}^{n}(K_i+1)$ equilibria are locally Mittag-Leffler stable. In [11], the authors explored multistability of CGNNs with non-monotonic activation functions and time-varying delays, and they found that one can obtain $(2K+1)^n$ equilibria for $n$-neuron CGNNs, with $(K+1)^n$ of them being exponentially stable. In addition, Wan and Liu [12] studied the multiple $O(t^{-q})$ stability of fractional-order CGNNs with Gaussian activation functions.

    To the best of our knowledge, it is agreed that the number of equilibrium points in multistability analysis of neural networks is intimately connected with the type of activation function. Some activation functions utilized widely in the existing literature are the saturation function, Gaussian function, sigmoid function, Mexican-hat function [13], etc. Among these functions, the Gaussian function endows neural networks with greater modeling power and adaptability due to its properties of being nonmonotonic, bounded, symmetric, strongly nonlinear, and nonnegative. Additionally, research has conclusively shown that employing Gaussian activation functions in neural networks can accelerate learning and improve prediction (see, e.g., [14,15]). As such, it is indispensable to analyze the dynamical behavior of neural networks with Gaussian activation functions. In the literature related to Gaussian functions, Liu et al. [16] addressed the stability problem of recurrent neural networks with Gaussian activation functions by analyzing the geometric properties of the Gaussian function. Their results concluded that there exist exactly $3^{k}$ equilibrium points, of which $2^{k}$ equilibria are locally exponentially stable, while $3^{k}-2^{k}$ equilibria are unstable. In [17], the dynamical behaviors of multiple equilibria for fractional-order competitive neural networks with Gaussian activation functions were explored.

    Due to the limited switching speeds and constrained signal propagation rates of neural amplifiers, it is imperative not to neglect time delays in neural networks (see, e.g., [18,19,20]). In fact, for some neurons, discrete-time delays offer a well-approximated and simplified circuit model for representing delay feedback systems. It is worth noting that neural networks typically exhibit spatial expansion since they consist of numerous parallel pathways with varying axon sizes and lengths. In such cases, the transmission of signals is no longer instantaneous and cannot be adequately characterized by discrete-time delays alone. That is, it is reasonable to include distributed time delays in neural networks, which can reveal the characteristics of neurons in the human brain more realistically (see, e.g., [21,22,23]). Therefore, analyzing CGNNs with mixed time delays is both reasonable and highly necessary.

    Nowadays, there are several frequently mentioned types of stability, such as asymptotic stability (see, e.g., [24,25]), exponential stability (see, e.g., [26,27]), logarithmic stability and polynomial stability [28]. In general, differences in stability indicate different convergence paradigms, allowing systems to satisfy the corresponding evolutionary requirements. Recently, a novel category of stability known as multimode function stability has been explored. Implementing this form of stability enables the simultaneous realization of the aforementioned types of stability. It is also revealed that multimode function stability can be employed in image processing and pattern recognition to construct neural network architectures with multiple feature extraction modes [29]. In [30], the authors presented multimode function multistability along with its specific formula. The state space was partitioned into $\prod_{i=1}^{n}(2H_i+1)$ regions based on the positions of the zeros of boundary functions. Furthermore, through the application of the Lyapunov stability theorem and fixed point theorem, some associated criteria for multimode function multistability were obtained.

    As indicated by the preceding analysis, many previous papers either analyzed only the multistability of CGNNs with/without time-varying delays and Gaussian activation functions, or solely examined the multimode function multistability of neural networks with mixed delays. There are few works studying the multimode function multistability of CGNNs with Gaussian activation functions and mixed time delays. Consequently, we address the multimode function multistability of CGNNs with Gaussian activation functions and mixed time delays. To be specific, the contributions of this paper can be summarized as follows. First, this paper focuses on a specific class of activation functions, namely, Gaussian functions. Through the geometrical properties of Gaussian functions, the state space can be partitioned into $3^{\mu}$ subspaces, where $0\le\mu\le n$. In contrast to the class of strictly nonlinear and monotonic activation functions considered in [30], the number of equilibrium points is here determined explicitly. Second, multimode function multistability is discussed. Quite different from most of the existing literature on the multistability of CGNNs with Gaussian functions, the multimode function multistability results derived herein cover multiple asymptotic stability, multiple exponential stability, multiple polynomial stability and multiple logarithmic stability, so the results presented in this paper are more universal. Finally, relying on the geometric properties of Gaussian functions and a fixed point theorem, we deduce some sufficient conditions that guarantee the coexistence of exactly $3^{\mu}$ equilibria for an $n$-dimensional neural network, among which $2^{\mu}$ equilibrium points are multimode function stable and $3^{\mu}-2^{\mu}$ equilibrium points are unstable. The results obtained here supplement the existing multimode function multistability criteria.

    Notations. In this article, for a given vector $x=(x_1,x_2,...,x_n)^T\in\mathbb{R}^n$, define $\|x\|=\max_{1\le i\le n}(|x_i|)$, and $\tilde{\tau}=\max_{1\le j\le n}\{\tilde{\tau}_j,\sup_{t\ge 0}\tau_j(t)\}$. Define $C([-\tilde{\tau},0],D)$ as the Banach space of continuous functions $\phi:[-\tilde{\tau},0]\to D\subseteq\mathbb{R}^n$. Let $\|\phi\|_{\tilde{\tau}}=\max_{1\le i\le n}(\sup_{-\tilde{\tau}\le r\le 0}|\phi_i(r)|)$.

    We introduce CGNNs with Gaussian activation functions and mixed time delays as follows:

    $$\frac{dx_i(t)}{dt}=m_i(x_i(t))\Big(-\eta_i x_i(t)+\sum_{j=1}^{n}\beta_{ij}f_j(x_j(t))+\sum_{j=1}^{n}\gamma_{ij}f_j(x_j(t-\tau_j(t)))+\sum_{j=1}^{n}\varphi_{ij}\int_{t-\tilde{\tau}_j}^{t}f_j(x_j(s))ds+I_i\Big),\quad i=1,2,...,n, \quad (2.1)$$

    where $x(t)=(x_1(t),x_2(t),...,x_n(t))^T\in\mathbb{R}^n$ is the state vector; $\eta_i$ stands for the strength of self-inhibition; $m_i(\cdot)$ is the amplification function; $\beta_{ij}$, $\gamma_{ij}$ and $\varphi_{ij}$ are connection weights; $\tau_j(\cdot)\ge 0$ represents the time-varying delay, and $\tilde{\tau}_j$ in the distributed delay term satisfies $\tilde{\tau}_j>0$; $I_i$ denotes the external input; $f_i(\cdot)$ is a Gaussian function with the expression:

    $$f_i(r)=\exp\Big(-\frac{(r-c_i)^2}{\rho_i^2}\Big), \quad (2.2)$$

    where (2.2) satisfies $f_i(r)\in(0,1]$ for $r\in\mathbb{R}$; $c_i$ represents the center and $\rho_i>0$ denotes the width.

    The initial value of (2.1) can be written as

    $$x_i(r)=\phi_i(r),\quad r\in[-\tilde{\tau},0],\quad i=1,2,...,n. \quad (2.3)$$

    Prior to the study, we need to recall some definitions and consider some assumptions which will be applied in subsequent content.

    Assumption 2.1. There are positive constants $\acute{m}_i$ and $\acute{M}_i$ such that

    $$\acute{m}_i\le m_i(r)\le \acute{M}_i,\quad r\in\mathbb{R}.$$

    Definition 2.1. A constant vector $x^*=(x_1^*,...,x_n^*)^T$ is regarded as an equilibrium point of (2.1) if $x^*$ satisfies

    $$-\eta_i x_i^*+\sum_{j=1}^{n}\beta_{ij}f_j(x_j^*)+\sum_{j=1}^{n}\gamma_{ij}f_j(x_j^*)+\sum_{j=1}^{n}\varphi_{ij}\tilde{\tau}_j f_j(x_j^*)+I_i=0,\quad i=1,2,...,n.$$

    Definition 2.2. Suppose that $x_i(t)$ is the solution of neural network (2.1) with initial condition (2.3). A given set $\Theta$ is referred to as a positive invariant set if, whenever the initial condition satisfies $\phi_i(t_0)\in\Theta$, then $x_i(t)\in\Theta$ for all $t\ge t_0$.

    Definition 2.3. Assume $x^*\in D$ is an equilibrium point of (2.1), and $D\subseteq\mathbb{R}^n$ is a positively invariant set. Furthermore, suppose that $\mathcal{F}(t)$ is a continuous and monotonically nondecreasing function for which $\mathcal{F}(t)>0$ for $t\ge 0$, $\mathcal{F}(r)=\mathcal{F}(0)$ for $r\in[-\tilde{\tau},0]$, and $\lim_{t\to\infty}\mathcal{F}(t)=+\infty$. If

    $$\|x(t)-x^*\|\le\frac{\iota\,\|\phi-x^*\|_{\tilde{\tau}}}{\mathcal{F}(t)},\quad t\ge 0,$$

    holds for any initial value $\phi(r)\in D$, $r\in[-\tilde{\tau},0]$, where $\iota>0$ is a positive constant, then (2.1) is locally multimode function stable.

    Calculating the first and second-order derivatives of the activation function $f_i(r)$:

    $$f_i'(r)=-\frac{2(r-c_i)}{\rho_i^2}\exp\Big(-\frac{(r-c_i)^2}{\rho_i^2}\Big),\qquad f_i''(r)=\frac{4\big(r-(c_i-\frac{\sqrt{2}}{2}\rho_i)\big)\big(r-(c_i+\frac{\sqrt{2}}{2}\rho_i)\big)}{\rho_i^4}\exp\Big(-\frac{(r-c_i)^2}{\rho_i^2}\Big),$$

    we can find that $f_i'(r)$ has the single root $r_i=c_i$, obtained by solving the equation $f_i'(r)=0$. Analogously, by solving the equation $f_i''(r)=0$, we can gain the two roots of $f_i''(r)$:

    $$C_i^-=c_i-\frac{\sqrt{2}}{2}\rho_i,\qquad C_i^+=c_i+\frac{\sqrt{2}}{2}\rho_i.$$

    Since $f_i''(r)>0$ for $r\in(-\infty,C_i^-)\cup(C_i^+,+\infty)$ and $f_i''(r)<0$ for $r\in(C_i^-,C_i^+)$, we can conclude that $C_i^-$ and $C_i^+$ are the maximum and minimum points of $f_i'(r)$, respectively. The maximum and minimum values of $f_i'(r)$ are $f_i'(C_i^-)=\sqrt{2}\exp(-1/2)/\rho_i$ and $f_i'(C_i^+)=-\sqrt{2}\exp(-1/2)/\rho_i$, respectively. For the convenience of discussion, we define $\delta_i=\sqrt{2}\exp(-1/2)/\rho_i$, $i=1,2,...,n$.
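    As a quick numerical sanity check (ours, not part of the paper's analysis), the critical points $C_i^{\pm}$ of $f_i'(r)$ and the bound $\delta_i$ can be verified for the parameter values $\rho_i=1$, $c_i=0$ used later in Section 4:

```python
import numpy as np

# Verify the critical-point analysis of f_i'(r) for rho_i = 1, c_i = 0
# (the parameter values used in the examples of Section 4).
rho, c = 1.0, 0.0
f  = lambda r: np.exp(-((r - c)**2) / rho**2)   # Gaussian activation (2.2)
df = lambda r: -2*(r - c)/rho**2 * f(r)         # first derivative

C_minus = c - np.sqrt(2)/2 * rho    # maximum point of f'
C_plus  = c + np.sqrt(2)/2 * rho    # minimum point of f'
delta   = np.sqrt(2)*np.exp(-0.5)/rho

r = np.linspace(-5, 5, 200001)
print(df(C_minus), delta)           # f'(C^-) equals delta, about 0.8578
print(df(r).max())                  # grid maximum of f' agrees with delta
```

    The same check with other values of $\rho_i$ confirms that $\delta_i$ scales as $1/\rho_i$.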

    Since $f_i(r)\in(0,1]$ for all $i=1,2,...,n$, let

    $$\check{s}_i=\sum_{j=1,j\ne i}^{n}\min\{0,\beta_{ij}\}+\sum_{j=1}^{n}\min\{0,\gamma_{ij}\}+\sum_{j=1}^{n}\min\{0,\varphi_{ij}\tilde{\tau}_j\}+I_i,\qquad \hat{s}_i=\sum_{j=1,j\ne i}^{n}\max\{0,\beta_{ij}\}+\sum_{j=1}^{n}\max\{0,\gamma_{ij}\}+\sum_{j=1}^{n}\max\{0,\varphi_{ij}\tilde{\tau}_j\}+I_i.$$

    Define the boundary functions:

    $$W_i^-(x_i(t))=-\eta_i x_i(t)+\beta_{ii}f_i(x_i(t))+\check{s}_i,\qquad W_i^+(x_i(t))=-\eta_i x_i(t)+\beta_{ii}f_i(x_i(t))+\hat{s}_i,$$

    and simultaneously, we define

    $$\bar{W}_i(x_i(t))=-\eta_i x_i(t)+\beta_{ii}f_i(x_i(t))+\bar{s}_i,$$

    where $\bar{s}_i\in(\check{s}_i,\hat{s}_i)$ is a constant. Then $W_i^-(x_i(t))$, $\bar{W}_i(x_i(t))$, and $W_i^+(x_i(t))$ are vertical shifts of one another.

    Let $N=\{1,2,...,n\}$. According to the specific values of the parameters $\eta_i$ and $\beta_{ii}$, define

    $$L_1=\Big\{i\in N\ \Big|\ 0<\frac{\eta_i}{\beta_{ii}}<\delta_i\Big\},\quad L_2=\Big\{i\in N\ \Big|\ \frac{\eta_i}{\beta_{ii}}>\delta_i\Big\},\quad L_3=\Big\{i\in N\ \Big|\ -\delta_i<\frac{\eta_i}{\beta_{ii}}<0\Big\},\quad L_4=\Big\{i\in N\ \Big|\ \frac{\eta_i}{\beta_{ii}}<-\delta_i\Big\}.$$

    Lemma 2.1 ([16]). If $i\in L_1$ or $i\in L_3$, then there are $p_i,q_i$ such that $\bar{W}_i'(p_i)=\bar{W}_i'(q_i)=0$, where $p_i<C_i^-<q_i<c_i$; if $i\in L_2$ or $i\in L_4$, then $\bar{W}_i'(r)<0$ for $r\in\mathbb{R}$.

    For the sake of discussion, the following subsets of $L_1$ and $L_3$ are considered:

    $$L_1^1=\{i\in L_1\ |\ W_i^+(p_i)<0,\ W_i^-(q_i)>0\},\quad L_1^2=\{i\in L_1\ |\ W_i^+(q_i)<0\},\quad L_1^3=\{i\in L_1\ |\ W_i^-(p_i)>0\},$$
    $$L_3^1=\{i\in L_3\ |\ W_i^+(p_i)<0,\ W_i^-(q_i)>0\},\quad L_3^2=\{i\in L_3\ |\ W_i^+(q_i)<0\},\quad L_3^3=\{i\in L_3\ |\ W_i^-(p_i)>0\}.$$

    Lemma 2.2 ([16]). If $i\in L_1^1\cup L_3^1$, then there exist three zeros $\check{u}_i,\check{v}_i,\check{\lambda}_i$ of $W_i^-(r)$ and three zeros $\hat{u}_i,\hat{v}_i,\hat{\lambda}_i$ of $W_i^+(r)$, satisfying $\check{u}_i<\hat{u}_i<p_i<\hat{v}_i<\check{v}_i<q_i<\check{\lambda}_i<\hat{\lambda}_i$.

    If $i\in L_1^2\cup L_3^2$, then there exists one zero $\check{o}_i$ of $W_i^-(r)$ and one zero $\hat{o}_i$ of $W_i^+(r)$, satisfying $\check{o}_i<\hat{o}_i<p_i$.

    If $i\in L_1^3\cup L_3^3$, then there exists one zero $\check{o}_i$ of $W_i^-(r)$ and one zero $\hat{o}_i$ of $W_i^+(r)$, satisfying $q_i<\check{o}_i<\hat{o}_i$.

    If $i\in L_2\cup L_4$, then there exists one zero $\check{o}_i$ of $W_i^-(r)$ and one zero $\hat{o}_i$ of $W_i^+(r)$, satisfying $\check{o}_i<\hat{o}_i$.

    In what follows, the number of equilibrium points of (2.1) is explored. Let $\mathrm{card}\,Q$ represent the cardinality of a given set $Q$. Define $\mu=\mathrm{card}(L_1^1\cup L_3^1)$, $k=\mathrm{card}(L_1^2\cup L_1^3\cup L_2\cup L_3^2\cup L_3^3\cup L_4)$, and let

    $$\bar{L}_i=\{[\check{o}_i,\hat{o}_i]\},\quad \tilde{L}_i=\{[\check{u}_i,\hat{u}_i],[\hat{v}_i,\check{v}_i],[\check{\lambda}_i,\hat{\lambda}_i]\},\quad \Theta=\Big\{\prod_{i=1}^{n}l_i,\ l_i\in\tilde{L}_i\ \text{or}\ l_i\in\bar{L}_i\Big\}.$$
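    Since $\Theta$ is a Cartesian product in which each of the $\mu$ indices in $L_1^1\cup L_3^1$ contributes three candidate intervals and each remaining index contributes one, $\Theta$ has $3^{\mu}$ elements. A minimal counting sketch (the interval endpoints below are invented purely for illustration):

```python
from itertools import product

# Toy configuration: n = 3 neurons, mu = 2 of them in L_1^1 (three candidate
# intervals each), k = 1 of them contributing a single interval.
L_tilde = [(-2.5, -2.2), (-1.0, -0.8), (0.3, 0.5)]   # three intervals
L_bar   = [(-3.1, -2.8)]                              # one interval

choices = [L_tilde, L_tilde, L_bar]   # one list of intervals per neuron
Theta = list(product(*choices))       # Cartesian product of the choices
print(len(Theta))                     # 3^mu = 3^2 = 9 product regions
```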

    The following assumption is required so as to ascertain the number of equilibrium points of (2.1).

    Assumption 3.1. $k+\mu=n$.

    Consequently, it can be seen that there exist $3^{\mu}$ elements in $\Theta$.

    Theorem 3.1. Suppose Assumption 3.1 holds. Further assume that

    $$\sum_{j=1,j\ne i}^{n}\delta_j|\beta_{ij}|+\sum_{j=1}^{n}\delta_j|\gamma_{ij}|+\sum_{j=1}^{n}\delta_j|\varphi_{ij}|\tilde{\tau}_j<F_i, \quad (3.1)$$

    for each $i\in N$, where $F_i$ is given in Table 1. Then, neural network (2.1) has exactly $3^{\mu}$ equilibria in $\mathbb{R}^n$.

    Table 1.  The value of $F_i$.

    $i\in L_1^1$: $F_i=\min\{\beta_{ii}\min(f_i'(\check{v}_i),f_i'(\hat{v}_i))-\eta_i,\ \eta_i-\beta_{ii}\max(f_i'(\hat{u}_i),f_i'(\hat{\lambda}_i),f_i'(\check{\lambda}_i))\}$
    $i\in L_1^2$: $F_i=\eta_i-\beta_{ii}f_i'(\hat{o}_i)$
    $i\in L_1^3$: $F_i=\eta_i-\beta_{ii}\max\{f_i'(\hat{o}_i),f_i'(\check{o}_i)\}$
    $i\in L_2$: $F_i=\eta_i-\beta_{ii}\delta_i$
    $i\in L_3^1$: $F_i=\min\{\beta_{ii}\max(f_i'(\check{v}_i),f_i'(\hat{v}_i))-\eta_i,\ \eta_i-\beta_{ii}\min(f_i'(\hat{u}_i),f_i'(\check{u}_i),f_i'(\check{\lambda}_i))\}$
    $i\in L_3^2$: $F_i=\eta_i-\beta_{ii}\min\{f_i'(\hat{o}_i),f_i'(\check{o}_i)\}$
    $i\in L_3^3$: $F_i=\eta_i-\beta_{ii}f_i'(\check{o}_i)$
    $i\in L_4$: $F_i=\eta_i+\beta_{ii}\delta_i$


    Proof. We first demonstrate the existence of equilibrium points of (2.1) in any $\Theta^{(1)}=\prod_{i=1}^{n}l_i=\prod_{i=1}^{n}[d_i,g_i]\in\Theta$.

    With regard to any given $x=(x_1,x_2,...,x_n)^T$ and index $i\in N$, define the following function:

    $$W_i(r)=-\eta_i r+\beta_{ii}f_i(r)+\sum_{j=1,j\ne i}^{n}\beta_{ij}f_j(x_j)+\sum_{j=1}^{n}\gamma_{ij}f_j(x_j)+\sum_{j=1}^{n}\varphi_{ij}\tilde{\tau}_j f_j(x_j)+I_i.$$

    Comparing $W_i(r)$ with $W_i^+(r)$ and $W_i^-(r)$, we can get that $W_i^-(r)\le W_i(r)\le W_i^+(r)$ for $r\in[d_i,g_i]$. Then, two cases will be considered.

    Case 1: when $l_i=[\hat{v}_i,\check{v}_i]$, we can obtain

    $$W_i(d_i)\le W_i^+(d_i)=0,\qquad W_i(g_i)\ge W_i^-(g_i)=0.$$

    Case 2: when $l_i\ne[\hat{v}_i,\check{v}_i]$, we get

    $$W_i(d_i)\ge W_i^-(d_i)=0,\qquad W_i(g_i)\le W_i^+(g_i)=0.$$

    Taken together, $W_i(d_i)W_i(g_i)\le 0$, whereupon, by the intermediate value theorem, there exists $\bar{x}_i\in[d_i,g_i]$ satisfying $W_i(\bar{x}_i)=0$ for $i=1,2,...,n$. Define a continuous mapping $\Xi:\Theta^{(1)}\to\Theta^{(1)}$, $\Xi(x_1,x_2,...,x_n)=(\bar{x}_1,\bar{x}_2,...,\bar{x}_n)^T$. By virtue of Brouwer's fixed point theorem, we can assert the existence of a fixed point $x^*=(x_1^*,x_2^*,...,x_n^*)^T$ of $\Xi$, which also serves as an equilibrium point of (2.1).

    Following that, we are prepared to certify the uniqueness of the equilibrium point in $\Theta^{(1)}$. For any $x,y\in\Theta^{(1)}$, suppose that $\Xi(x)=x$ and $\Xi(y)=y$; that is, $x_i$ and $y_i$ are both roots of the corresponding $W_i(r)$.

    Hence,

    $$-\eta_i x_i+\beta_{ii}f_i(x_i)+\sum_{j=1,j\ne i}^{n}\beta_{ij}f_j(x_j)+\sum_{j=1}^{n}\gamma_{ij}f_j(x_j)+\sum_{j=1}^{n}\varphi_{ij}\tilde{\tau}_j f_j(x_j)+I_i=0, \quad (3.2)$$
    $$-\eta_i y_i+\beta_{ii}f_i(y_i)+\sum_{j=1,j\ne i}^{n}\beta_{ij}f_j(y_j)+\sum_{j=1}^{n}\gamma_{ij}f_j(y_j)+\sum_{j=1}^{n}\varphi_{ij}\tilde{\tau}_j f_j(y_j)+I_i=0. \quad (3.3)$$

    Subtracting (3.3) from (3.2), and applying the mean value theorem together with the Lipschitz bound $|f_j'(r)|\le\delta_j$, it follows that

    $$|-\eta_i(x_i-y_i)+\beta_{ii}(f_i(x_i)-f_i(y_i))|=|\eta_i-\beta_{ii}f_i'(\xi_i)|\,|x_i-y_i|\le\sum_{j=1,j\ne i}^{n}|\beta_{ij}|\delta_j|x_j-y_j|+\sum_{j=1}^{n}|\gamma_{ij}|\delta_j|x_j-y_j|+\sum_{j=1}^{n}|\varphi_{ij}|\tilde{\tau}_j\delta_j|x_j-y_j|,$$

    where $\min(x_i,y_i)\le\xi_i\le\max(x_i,y_i)$. In the following, eight cases are discussed.

    Case 1: $i\in L_1^1$.

    If $\xi_i\in[\check{u}_i,\hat{u}_i]$, we have $f_i'(\xi_i)\le\frac{\eta_i}{\beta_{ii}}$ and $0<f_i'(\check{u}_i)\le f_i'(\xi_i)\le f_i'(\hat{u}_i)$; hence

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i-\beta_{ii}f_i'(\hat{u}_i)\ge F_i.$$

    If $\xi_i\in[\hat{v}_i,\check{v}_i]$, we can get $f_i'(\xi_i)\ge\frac{\eta_i}{\beta_{ii}}$ and $0<\min\{f_i'(\check{v}_i),f_i'(\hat{v}_i)\}\le f_i'(\xi_i)\le\delta_i$; then

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\beta_{ii}f_i'(\xi_i)-\eta_i\ge\beta_{ii}\min\{f_i'(\check{v}_i),f_i'(\hat{v}_i)\}-\eta_i\ge F_i.$$

    If $\xi_i\in[\check{\lambda}_i,\hat{\lambda}_i]$, we can obtain $f_i'(\xi_i)\le\frac{\eta_i}{\beta_{ii}}$ and $-\delta_i\le f_i'(\xi_i)\le\max\{f_i'(\check{\lambda}_i),f_i'(\hat{\lambda}_i)\}$; then

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i-\beta_{ii}\max\{f_i'(\check{\lambda}_i),f_i'(\hat{\lambda}_i)\}\ge F_i.$$

    Case 2: $i\in L_1^2$. In this case, $\xi_i\in[\check{o}_i,\hat{o}_i]$, and we know $f_i'(\xi_i)\le\frac{\eta_i}{\beta_{ii}}$ and $0<f_i'(\check{o}_i)\le f_i'(\xi_i)\le f_i'(\hat{o}_i)$. Hence,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|\ge\eta_i-\beta_{ii}f_i'(\hat{o}_i)\ge F_i.$$

    Case 3: $i\in L_1^3$. In this case, $f_i'(\xi_i)\le\frac{\eta_i}{\beta_{ii}}$ and $f_i'(\xi_i)\le\max\{f_i'(\check{o}_i),f_i'(\hat{o}_i)\}$. Hence,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|\ge\eta_i-\beta_{ii}\max\{f_i'(\check{o}_i),f_i'(\hat{o}_i)\}\ge F_i.$$

    Case 4: $i\in L_2$. In this case, $\xi_i\in[\check{o}_i,\hat{o}_i]$ and $f_i'(\xi_i)\le\delta_i<\frac{\eta_i}{\beta_{ii}}$, so we can get

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i-\beta_{ii}\delta_i\ge F_i.$$

    Case 5: $i\in L_3^1$.

    If $\xi_i\in[\check{u}_i,\hat{u}_i]$, we have $\beta_{ii}<0$, $f_i'(\xi_i)\ge\frac{\eta_i}{\beta_{ii}}$, and $\min\{f_i'(\check{u}_i),f_i'(\hat{u}_i)\}\le f_i'(\xi_i)\le\delta_i$. Hence,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i-\beta_{ii}\min\{f_i'(\check{u}_i),f_i'(\hat{u}_i)\}\ge F_i.$$

    If $\xi_i\in[\hat{v}_i,\check{v}_i]$, we can get $\beta_{ii}<0$, $f_i'(\xi_i)\le\frac{\eta_i}{\beta_{ii}}$, and $-\delta_i\le f_i'(\xi_i)\le\max\{f_i'(\check{v}_i),f_i'(\hat{v}_i)\}<0$. Then,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\beta_{ii}f_i'(\xi_i)-\eta_i\ge\beta_{ii}\max\{f_i'(\check{v}_i),f_i'(\hat{v}_i)\}-\eta_i\ge F_i.$$

    If $\xi_i\in[\check{\lambda}_i,\hat{\lambda}_i]$, we can obtain $\beta_{ii}<0$, $f_i'(\xi_i)\ge\frac{\eta_i}{\beta_{ii}}$, and $f_i'(\check{\lambda}_i)\le f_i'(\xi_i)\le f_i'(\hat{\lambda}_i)<0$. Then,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i-\beta_{ii}f_i'(\check{\lambda}_i)\ge F_i.$$

    Case 6: $i\in L_3^2$. In this case, $\beta_{ii}<0$ and $\xi_i\in[\check{o}_i,\hat{o}_i]$. We know $f_i'(\xi_i)\ge\frac{\eta_i}{\beta_{ii}}$ and $\min\{f_i'(\check{o}_i),f_i'(\hat{o}_i)\}\le f_i'(\xi_i)\le\delta_i$. Hence,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|\ge\eta_i-\beta_{ii}\min\{f_i'(\check{o}_i),f_i'(\hat{o}_i)\}\ge F_i.$$

    Case 7: $i\in L_3^3$. In this case, $\beta_{ii}<0$, $f_i'(\xi_i)\ge\frac{\eta_i}{\beta_{ii}}$, and $f_i'(\check{o}_i)\le f_i'(\xi_i)\le f_i'(\hat{o}_i)<0$. Hence,

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|\ge\eta_i-\beta_{ii}f_i'(\check{o}_i)\ge F_i.$$

    Case 8: $i\in L_4$. In this case, $\beta_{ii}<0$, $\xi_i\in[\check{o}_i,\hat{o}_i]$, and $f_i'(\xi_i)\ge-\delta_i>\frac{\eta_i}{\beta_{ii}}$, so we can get

    $$|\eta_i-\beta_{ii}f_i'(\xi_i)|=\eta_i-\beta_{ii}f_i'(\xi_i)\ge\eta_i+\beta_{ii}\delta_i\ge F_i.$$

    Based on the above discussion,

    $$\|\Xi(x)-\Xi(y)\|=\max_{1\le i\le n}(|\Xi_i(x)-\Xi_i(y)|)=\max_{1\le i\le n}(|x_i-y_i|)\le\max_{1\le i\le n}\Big(\frac{1}{|\eta_i-\beta_{ii}f_i'(\xi_i)|}\Big(\sum_{j=1,j\ne i}^{n}|\beta_{ij}|\delta_j|x_j-y_j|+\sum_{j=1}^{n}|\gamma_{ij}|\delta_j|x_j-y_j|+\sum_{j=1}^{n}|\varphi_{ij}|\tilde{\tau}_j\delta_j|x_j-y_j|\Big)\Big)\le\max_{1\le i\le n}\Big(\frac{\sum_{j=1,j\ne i}^{n}\delta_j|\beta_{ij}|+\sum_{j=1}^{n}\delta_j|\gamma_{ij}|+\sum_{j=1}^{n}\delta_j|\varphi_{ij}|\tilde{\tau}_j}{F_i}\Big)\|x-y\|=\Delta\|x-y\|,$$

    where $\Delta=\max_{1\le i\le n}\Big(\frac{\sum_{j=1,j\ne i}^{n}\delta_j|\beta_{ij}|+\sum_{j=1}^{n}\delta_j|\gamma_{ij}|+\sum_{j=1}^{n}\delta_j|\varphi_{ij}|\tilde{\tau}_j}{F_i}\Big)$, and $F_i$ is described in Table 1.

    Recalling (3.1), $\Delta<1$. Consequently, $\Xi$ is a contraction mapping on $\Theta^{(1)}\in\Theta$. Hence, a unique equilibrium point exists within $\Theta^{(1)}$. By Assumption 3.1, the number of elements of $\Theta$ is $3^{\mu}$, so the neural network (2.1) has exactly $3^{\mu}$ equilibrium points.

    From the discussion in the preceding subsection, we have obtained that there are exactly $3^{\mu}$ equilibrium points. In this subsection, we will inquire into the multimode function stability of the $3^{\mu}$ equilibria for CGNNs with Gaussian activation functions and mixed time delays. For this purpose, the invariant sets need to be specified.

    Define

    $$\bar{L}_i^{\varrho}=\{[\check{o}_i-\varrho,\hat{o}_i+\varrho]\},\quad L_i^{\varrho}=\{[\check{u}_i-\varrho,\hat{u}_i+\varrho],[\check{\lambda}_i-\varrho,\hat{\lambda}_i+\varrho]\},\quad \tilde{L}_i^{\varrho}=\{[\check{u}_i-\varrho,\hat{u}_i+\varrho],[\hat{v}_i-\varrho,\check{v}_i+\varrho],[\check{\lambda}_i-\varrho,\hat{\lambda}_i+\varrho]\},$$
    $$\Theta_i^{\varrho}=\Big\{\prod_{i=1}^{n}l_i,\ l_i\in L_i^{\varrho}\ \text{or}\ l_i\in\bar{L}_i^{\varrho}\Big\},\quad \tilde{\Theta}_i^{\varrho}=\Big\{\prod_{i=1}^{n}l_i,\ l_i\in\tilde{L}_i^{\varrho}\ \text{or}\ l_i\in\bar{L}_i^{\varrho}\Big\},\quad \check{\Theta}_i^{\varrho}=\tilde{\Theta}_i^{\varrho}\setminus\Theta_i^{\varrho},$$

    where $0<\varrho<\min_{1\le i\le n}(\varrho_i)$, and define

    $$\varrho_i=\begin{cases}\min(p_i-\hat{u}_i,\ \check{\lambda}_i-q_i),& i\in L_1^1\cup L_3^1,\\ p_i-\hat{o}_i,& i\in L_1^2\cup L_3^2,\\ \check{o}_i-q_i,& i\in L_1^3\cup L_3^3,\\ 1,& i\in L_2\cup L_4.\end{cases}$$

    Let $\Theta^{\varrho(1)}=\prod_{i=1}^{n}[\check{\ell}_i-\varrho,\hat{\ell}_i+\varrho]$ and $\check{\Theta}^{\varrho(1)}=\prod_{i=1}^{n}[\check{\epsilon}_i-\varrho,\hat{\epsilon}_i+\varrho]$ denote elements of $\Theta_i^{\varrho}$ and $\check{\Theta}_i^{\varrho}$, respectively.

    Remark 3.1. Under the conditions of Theorem 3.1, it is observed that there are exactly $2^{\mu}$ elements in $\Theta_i^{\varrho}$ and $3^{\mu}-2^{\mu}$ elements in $\check{\Theta}_i^{\varrho}$.

    Theorem 3.2. Suppose Assumption 3.1 holds. Then, each $\Theta^{\varrho(1)}\in\Theta_i^{\varrho}$ is a positive invariant set for (2.1) with initial state $\phi_i(t_0)\in\Theta^{\varrho(1)}$.

    Proof. For any initial value $\phi_i(s)\in C([-\tilde{\tau},0],D)$ with $\phi_i(t_0)\in[\check{\ell}_i-\varrho,\hat{\ell}_i+\varrho]$, we require that the corresponding solution $x_i(t)$ of (2.1) satisfies $x_i(t)\in[\check{\ell}_i-\varrho,\hat{\ell}_i+\varrho]$ for all $t\ge t_0$. Otherwise, there must exist an index $i$, instants $t_2>t_1>t_0$, and an adequately small positive number $\omega$ such that

    $$\begin{cases}x_i(t_1)=\hat{\ell}_i+\varrho,\\ x_i(t_2)=\hat{\ell}_i+\varrho+\omega,\\ x_i'(t_1)\ge 0.\end{cases} \quad (3.4)$$

    On the other hand, it is not difficult to observe that for any element $[\check{\ell}_i-\varrho,\hat{\ell}_i+\varrho]\in\Theta_i^{\varrho}$, $W_i^+(\hat{\ell}_i+\varrho)<0$; then

    $$\frac{dx_i(t)}{dt}\Big|_{t=t_1}\le m_i(x_i(t_1))W_i^+(x_i(t_1))<0.$$

    This contradicts $x_i'(t_1)\ge 0$. Then, $x_i(t)\le\hat{\ell}_i+\varrho$. Likewise, we can prove that $x_i(t)\ge\check{\ell}_i-\varrho$ for $t\ge t_0$ and $i=1,2,...,n$. Accordingly, each set in $\Theta_i^{\varrho}$ is a positive invariant set.

    Remark 3.2. From Remark 3.1, there exist $2^{\mu}$ elements in $\Theta_i^{\varrho}$, so the number of positively invariant sets is $2^{\mu}$ for initial states $\phi_i(t_0)\in\Theta^{\varrho(1)}$ of neural network (2.1).

    Below, we will investigate whether the equilibria located in the positive invariant sets are multimode function stable for neural network (2.1). For this reason, we need to introduce the following assumption and lemma.

    Assumption 3.2. $\mathcal{F}(t)$ is a continuous and monotonically nondecreasing function satisfying $\mathcal{F}(t)>0$ for $t\ge 0$ and $\mathcal{F}(r)=\mathcal{F}(0)$ for $r\in[-\tilde{\tau},0]$. Further suppose that

    $$\frac{d\mathcal{F}(t)}{dt}\Big/\mathcal{F}(t)=\varepsilon-P(t),\quad t\ge 0,$$

    holds, where $P(\cdot)$ is a monotonically nondecreasing nonnegative function and $\varepsilon>0$ is a constant.

    Hence, it is easy to obtain that $\frac{d\mathcal{F}(t)}{dt}\big/\mathcal{F}(t)\le\varepsilon$.

    Lemma 3.1 ([26]). Suppose that Assumption 3.2 holds. Then,

    $$\frac{\mathcal{F}(t)}{\mathcal{F}(t+\zeta)}\le\frac{\mathcal{F}(-\zeta)}{\mathcal{F}(0)},\quad t\ge 0,$$

    where $\zeta\in[-\tilde{\tau},0]$ is a constant.
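    A quick numerical spot-check of Lemma 3.1 (ours, not from [26]) for the two mode functions used later in Section 4, $\mathcal{F}(t)=\exp(0.06t)$ and $\mathcal{F}(t)=\ln(t+8.0101)$, extended by $\mathcal{F}(r)=\mathcal{F}(0)$ on $[-\tilde{\tau},0]$ as Definition 2.3 requires:

```python
import numpy as np

tau_tilde = 2.5   # the value of tilde_tau in Example 4.1

def extended(F):
    # extend the mode function by F(r) = F(0) on [-tau_tilde, 0]
    return lambda t: F(max(t, 0.0))

def lemma_holds(F):
    # check F(t)/F(t + zeta) <= F(-zeta)/F(0) on a grid of (t, zeta) pairs
    return all(F(t)/F(t + z) <= F(-z)/F(0) + 1e-12
               for t in np.linspace(0, 50, 201)
               for z in np.linspace(-tau_tilde, 0, 26))

exp_mode = extended(lambda t: np.exp(0.06*t))
log_mode = extended(lambda t: np.log(t + 8.0101))
print(lemma_holds(exp_mode), lemma_holds(log_mode))
```

    For the exponential mode function the inequality holds with equality whenever $t+\zeta\ge 0$, which is why a small tolerance is added in the comparison.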

    Let $x^*\in\Theta^{\varrho(1)}$ be an equilibrium point of (2.1). Define

    $$\upsilon(t)=x(t)-x^*,$$

    where $x(t)=(x_1(t),x_2(t),...,x_n(t))^T$ is the solution of neural network (2.1) with initial condition $\phi(r)\in\Theta^{\varrho(1)}$, $r\in[-\tilde{\tau},0]$.

    Thereupon,

    $$\frac{d\upsilon_i(t)}{dt}=m_i(\upsilon_i(t)+x_i^*)\Big(-\eta_i\upsilon_i(t)+\beta_{ii}(f_i(\upsilon_i(t)+x_i^*)-f_i(x_i^*))+\sum_{j=1,j\ne i}^{n}\beta_{ij}(f_j(\upsilon_j(t)+x_j^*)-f_j(x_j^*))+\sum_{j=1}^{n}\gamma_{ij}(f_j(\upsilon_j(t-\tau_j(t))+x_j^*)-f_j(x_j^*))+\sum_{j=1}^{n}\varphi_{ij}\int_{t-\tilde{\tau}_j}^{t}(f_j(\upsilon_j(s)+x_j^*)-f_j(x_j^*))ds\Big). \quad (3.5)$$

    For convenience, let $F_i(t)=f_i(\upsilon_i(t)+x_i^*)-f_i(x_i^*)$. Hence, from (3.5),

    $$\frac{d|\upsilon_i(t)|}{dt}=\mathrm{sign}(\upsilon_i(t))\frac{d\upsilon_i(t)}{dt}=\mathrm{sign}(\upsilon_i(t))\,m_i(\upsilon_i(t)+x_i^*)\Big(-\eta_i\upsilon_i(t)+\beta_{ii}F_i(t)+\sum_{j=1,j\ne i}^{n}\beta_{ij}F_j(t)+\sum_{j=1}^{n}\gamma_{ij}F_j(t-\tau_j(t))+\sum_{j=1}^{n}\varphi_{ij}\int_{t-\tilde{\tau}_j}^{t}F_j(s)ds\Big). \quad (3.6)$$

    Consider the following expression:

    $$\mathrm{sign}(\upsilon_i(t))\,m_i(\upsilon_i(t)+x_i^*)\,\beta_{ii}F_i(t)=m_i(\upsilon_i(t)+x_i^*)\,\beta_{ii}\,|\upsilon_i(t)|\,\frac{F_i(t)}{\upsilon_i(t)}.$$

    When $i\in L_1^1$: if $l_i=[\check{u}_i-\varrho,\hat{u}_i+\varrho]$,

    $$0<f_i'(\check{u}_i-\varrho)<\frac{F_i(t)}{\upsilon_i(t)}<f_i'(\hat{u}_i+\varrho),$$

    and if $l_i=[\check{\lambda}_i-\varrho,\hat{\lambda}_i+\varrho]$,

    $$\frac{F_i(t)}{\upsilon_i(t)}<\max(f_i'(\check{\lambda}_i-\varrho),f_i'(\hat{\lambda}_i+\varrho)).$$

    When $i\in L_1^2$, $l_i=[\check{o}_i-\varrho,\hat{o}_i+\varrho]$,

    $$0<f_i'(\check{o}_i-\varrho)<\frac{F_i(t)}{\upsilon_i(t)}<f_i'(\hat{o}_i+\varrho).$$

    When $i\in L_1^3$, $l_i=[\check{o}_i-\varrho,\hat{o}_i+\varrho]$,

    $$\frac{F_i(t)}{\upsilon_i(t)}<\max(f_i'(\check{o}_i-\varrho),f_i'(\hat{o}_i+\varrho)).$$

    When $i\in L_2\cup L_4$, $l_i=[\check{o}_i-\varrho,\hat{o}_i+\varrho]$, $-\delta_i\le\frac{F_i(t)}{\upsilon_i(t)}\le\delta_i$.

    When $i\in L_3^1$: if $l_i=[\check{u}_i-\varrho,\hat{u}_i+\varrho]$,

    $$\min\{f_i'(\check{u}_i-\varrho),f_i'(\hat{u}_i+\varrho)\}<\frac{F_i(t)}{\upsilon_i(t)}<\delta_i,$$

    and if $l_i=[\check{\lambda}_i-\varrho,\hat{\lambda}_i+\varrho]$,

    $$f_i'(\check{\lambda}_i-\varrho)<\frac{F_i(t)}{\upsilon_i(t)}<f_i'(\hat{\lambda}_i+\varrho).$$

    When $i\in L_3^2$, $l_i=[\check{o}_i-\varrho,\hat{o}_i+\varrho]$,

    $$\min\{f_i'(\check{o}_i-\varrho),f_i'(\hat{o}_i+\varrho)\}<\frac{F_i(t)}{\upsilon_i(t)}<\delta_i.$$

    When $i\in L_3^3$, $l_i=[\check{o}_i-\varrho,\hat{o}_i+\varrho]$,

    $$f_i'(\check{o}_i-\varrho)<\frac{F_i(t)}{\upsilon_i(t)}<f_i'(\hat{o}_i+\varrho).$$

    Taking these cases into account, we can get

    $$\mathrm{sign}(\upsilon_i(t))\,m_i(\upsilon_i(t)+x_i^*)\,\beta_{ii}F_i(t)\le\beta_{ii}\,m_i(\upsilon_i(t)+x_i^*)\,|\upsilon_i(t)|\,\Psi_i, \quad (3.7)$$

    where Ψi is described in Table 2.

    Table 2.  The value of $\Psi_i$.

    $i\in L_1^1$: $\Psi_i=\max\{f_i'(\hat{u}_i+\varrho),\,f_i'(\check{\lambda}_i-\varrho),\,f_i'(\hat{\lambda}_i+\varrho)\}$
    $i\in L_1^2$: $\Psi_i=f_i'(\hat{o}_i+\varrho)$
    $i\in L_1^3$: $\Psi_i=\max\{f_i'(\hat{o}_i+\varrho),\,f_i'(\check{o}_i-\varrho)\}$
    $i\in L_2$: $\Psi_i=\delta_i$
    $i\in L_3^1$: $\Psi_i=\min\{f_i'(\hat{u}_i+\varrho),\,f_i'(\check{u}_i-\varrho),\,f_i'(\check{\lambda}_i-\varrho)\}$
    $i\in L_3^2$: $\Psi_i=\min\{f_i'(\hat{o}_i+\varrho),\,f_i'(\check{o}_i-\varrho)\}$
    $i\in L_3^3$: $\Psi_i=f_i'(\check{o}_i-\varrho)$
    $i\in L_4$: $\Psi_i=-\delta_i$


    Theorem 3.3. Assume that Assumptions 2.1–3.2 are satisfied. Further suppose that there are $n$ positive constants $\sigma_1,\sigma_2,...,\sigma_n$ such that

    $$\Big(\eta_i-\beta_{ii}\Psi_i-\frac{1}{\sigma_i}\sum_{j=1,j\ne i}^{n}|\beta_{ij}|\Psi_j\sigma_j-\varepsilon\Big)-\frac{1}{\sigma_i}\sum_{j=1}^{n}|\gamma_{ij}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau})}{\mathcal{F}(0)}-\frac{1}{\sigma_i}\sum_{j=1}^{n}|\varphi_{ij}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau}_j)\tilde{\tau}_j}{\mathcal{F}(0)}>0, \quad (3.8)$$

    holds for $i=1,2,...,n$. Then, there are $2^{\mu}$ equilibria of (2.1) which are locally multimode function stable, and $3^{\mu}-2^{\mu}$ equilibrium points which are unstable.

    Proof. Based on the analysis in the previous subsection, there exist exactly $2^{\mu}$ equilibria in $\Theta_i^{\varrho}$. Our objective now is simply to prove that the $2^{\mu}$ equilibria in $\Theta_i^{\varrho}$ are multimode function stable, while the other equilibria in $\check{\Theta}_i^{\varrho}$ are unstable.

    Take

    $$\varpi(t)=\max_{1\le i\le n}\Big(\frac{|\upsilon_i(t)|}{\sigma_i}\Big),\quad \tilde{\varpi}(t)=\mathcal{F}(t)\varpi(t),\quad \hat{\varpi}(t)=\sup_{-\tilde{\tau}\le r\le t}\tilde{\varpi}(r),$$

    and there must be some $\kappa\in\{1,2,...,n\}$ such that $\varpi(t)=\frac{|\upsilon_\kappa(t)|}{\sigma_\kappa}$.

    Under (3.6), we get

    $$\frac{d\varpi(t)}{dt}=\frac{1}{\sigma_\kappa}\frac{d|\upsilon_\kappa(t)|}{dt}=\frac{\mathrm{sign}(\upsilon_\kappa(t))}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\Big(-\eta_\kappa\upsilon_\kappa(t)+\beta_{\kappa\kappa}F_\kappa(t)+\sum_{j=1,j\ne\kappa}^{n}\beta_{\kappa j}F_j(t)+\sum_{j=1}^{n}\gamma_{\kappa j}F_j(t-\tau_j(t))+\sum_{j=1}^{n}\varphi_{\kappa j}\int_{t-\tilde{\tau}_j}^{t}F_j(s)ds\Big).$$

    Note that

    $$\frac{\mathrm{sign}(\upsilon_\kappa(t))}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1,j\ne\kappa}^{n}\beta_{\kappa j}F_j(t)\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j|\upsilon_j(t)|\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j\sigma_j\varpi(t),$$
    $$\frac{\mathrm{sign}(\upsilon_\kappa(t))}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}\gamma_{\kappa j}F_j(t-\tau_j(t))\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j|\upsilon_j(t-\tau_j(t))|\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j\sigma_j\varpi(t-\tau_j(t)),$$

    and

    $$\frac{\mathrm{sign}(\upsilon_\kappa(t))}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}\varphi_{\kappa j}\int_{t-\tilde{\tau}_j}^{t}F_j(s)ds\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}|\varphi_{\kappa j}|\Psi_j\int_{t-\tilde{\tau}_j}^{t}\sigma_j\frac{|\upsilon_j(s)|}{\sigma_j}ds\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}|\varphi_{\kappa j}|\sigma_j\Psi_j\int_{t-\tilde{\tau}_j}^{t}\frac{\mathcal{F}(s)\varpi(s)}{\mathcal{F}(s)}ds\le\frac{1}{\sigma_\kappa}m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\sum_{j=1}^{n}|\varphi_{\kappa j}|\sigma_j\Psi_j\,\hat{\varpi}(t)\int_{t-\tilde{\tau}_j}^{t}\frac{1}{\mathcal{F}(s)}ds.$$

    Combining the above calculation with (3.7), we can obtain

    $$\frac{d\varpi(t)}{dt}\le m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\Big(-\eta_\kappa+\beta_{\kappa\kappa}\Psi_\kappa+\frac{1}{\sigma_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j\sigma_j\Big)\varpi(t)+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j\sigma_j\varpi(t-\tau_j(t))+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}\Psi_j|\varphi_{\kappa j}|\sigma_j\int_{t-\tilde{\tau}_j}^{t}\frac{1}{\mathcal{F}(s)}ds\,\hat{\varpi}(t). \quad (3.9)$$

    By invoking (3.9),

    $$\frac{d\tilde{\varpi}(t)}{dt}=\frac{d(\varpi(t)\mathcal{F}(t))}{dt}=\varpi(t)\frac{d\mathcal{F}(t)}{dt}+\mathcal{F}(t)\frac{d\varpi(t)}{dt}\le m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\Big(-\eta_\kappa+\beta_{\kappa\kappa}\Psi_\kappa+\frac{1}{\sigma_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j\sigma_j+\frac{d\mathcal{F}(t)}{dt}\Big/\mathcal{F}(t)\Big)\varpi(t)\mathcal{F}(t)+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j\sigma_j\mathcal{F}(t)\varpi(t-\tau_j(t))+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}\Psi_j|\varphi_{\kappa j}|\sigma_j\int_{t-\tilde{\tau}_j}^{t}\frac{\mathcal{F}(t)}{\mathcal{F}(s)}ds\,\hat{\varpi}(t).$$

    From Lemma 3.1,

    $$\mathcal{F}(t)\varpi(t-\tau_j(t))=\frac{\mathcal{F}(t)}{\mathcal{F}(t-\tau_j(t))}\mathcal{F}(t-\tau_j(t))\varpi(t-\tau_j(t))\le\frac{\mathcal{F}(t)}{\mathcal{F}(t-\tilde{\tau})}\tilde{\varpi}(t-\tau_j(t))\le\frac{\mathcal{F}(\tilde{\tau})}{\mathcal{F}(0)}\hat{\varpi}(t),$$
    $$\int_{t-\tilde{\tau}_j}^{t}\frac{\mathcal{F}(t)}{\mathcal{F}(s)}ds\le\int_{t-\tilde{\tau}_j}^{t}\frac{\mathcal{F}(t)}{\mathcal{F}(t-\tilde{\tau}_j)}ds\le\frac{\mathcal{F}(\tilde{\tau}_j)\tilde{\tau}_j}{\mathcal{F}(0)}.$$

    Hence, using $\frac{d\mathcal{F}(t)}{dt}\big/\mathcal{F}(t)\le\varepsilon$,

    $$\frac{d\tilde{\varpi}(t)}{dt}\le m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)\Big(-\eta_\kappa+\beta_{\kappa\kappa}\Psi_\kappa+\frac{1}{\sigma_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j\sigma_j+\varepsilon\Big)\tilde{\varpi}(t)+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau})}{\mathcal{F}(0)}\hat{\varpi}(t)+\frac{m_\kappa(\upsilon_\kappa(t)+x_\kappa^*)}{\sigma_\kappa}\sum_{j=1}^{n}|\varphi_{\kappa j}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau}_j)\tilde{\tau}_j}{\mathcal{F}(0)}\hat{\varpi}(t). \quad (3.10)$$

    In the following, we claim that

    $$\tilde{\varpi}(t)\le\hat{\varpi}(0),\quad t\ge 0, \quad (3.11)$$

    and if not, there must be a certain $T>0$ satisfying

    $$\tilde{\varpi}(T)=\hat{\varpi}(T)>\hat{\varpi}(0)\ge 0.$$

    In other words,

    $$\begin{cases}\tilde{\varpi}(t)<\hat{\varpi}(T),& t\in[-\tilde{\tau},T),\\ \tilde{\varpi}(t)=\hat{\varpi}(T),& t=T,\end{cases}$$

    which means that

    $$\frac{d\tilde{\varpi}(t)}{dt}\Big|_{t=T}\ge 0. \quad (3.12)$$

    From the above discussion and (3.10), noting that $\hat{\varpi}(T)=\tilde{\varpi}(T)$, we can derive

    $$\frac{d\tilde{\varpi}(t)}{dt}\Big|_{t=T}\le-m_\kappa(\upsilon_\kappa(T)+x_\kappa^*)\Big(\Big(\eta_\kappa-\beta_{\kappa\kappa}\Psi_\kappa-\frac{1}{\sigma_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\beta_{\kappa j}|\Psi_j\sigma_j-\varepsilon\Big)-\frac{1}{\sigma_\kappa}\sum_{j=1}^{n}|\gamma_{\kappa j}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau})}{\mathcal{F}(0)}-\frac{1}{\sigma_\kappa}\sum_{j=1}^{n}|\varphi_{\kappa j}|\Psi_j\sigma_j\frac{\mathcal{F}(\tilde{\tau}_j)\tilde{\tau}_j}{\mathcal{F}(0)}\Big)\tilde{\varpi}(T)<0.$$

    This contradicts (3.12). Hence, (3.11) holds, which implies

    $$\|x(t)-x^*\|=\|\upsilon(t)\|\le\hat{\sigma}\max_{1\le i\le n}\Big(\frac{|\upsilon_i(t)|}{\sigma_i}\Big)=\hat{\sigma}\frac{\tilde{\varpi}(t)}{\mathcal{F}(t)}\le\hat{\sigma}\frac{\hat{\varpi}(0)}{\mathcal{F}(t)}\le\frac{\hat{\sigma}\sup_{-\tilde{\tau}\le r\le 0}(\varpi(r)\mathcal{F}(r))}{\mathcal{F}(t)}=\frac{\hat{\sigma}\mathcal{F}(0)\sup_{-\tilde{\tau}\le r\le 0}\varpi(r)}{\mathcal{F}(t)}\le\frac{\iota\|\phi-x^*\|_{\tilde{\tau}}}{\mathcal{F}(t)},$$

    where $\iota=\frac{\hat{\sigma}\mathcal{F}(0)}{\check{\sigma}}$, $\hat{\sigma}=\max_{1\le i\le n}(\sigma_i)$, $\check{\sigma}=\min_{1\le i\le n}(\sigma_i)$. The proof is accomplished.

    In this section, we offer two numerical examples of 2-dimensional CGNNs with Gaussian activation functions and mixed time delays to show the efficacy of the theoretical results.

    Example 4.1. Consider the 2-dimensional CGNN with Gaussian activation functions and mixed time delays presented below:

    $$\begin{cases}\dfrac{dx_1(t)}{dt}=(4+\sin(x_1(t)))\Big(-x_1(t)+3.3f_1(x_1(t))+0.08f_2(x_2(t))-0.03f_1(x_1(t-\tau_1(t)))+0.12f_2(x_2(t-\tau_2(t)))+0.01\displaystyle\int_{t-\tilde{\tau}_1}^{t}f_1(x_1(s))ds+0.02\displaystyle\int_{t-\tilde{\tau}_2}^{t}f_2(x_2(s))ds-2.5\Big),\\ \dfrac{dx_2(t)}{dt}=(5+\sin(x_2(t)))\Big(-1.2x_2(t)+0.16f_1(x_1(t))+1.1f_2(x_2(t))-0.01f_1(x_1(t-\tau_1(t)))+0.02f_2(x_2(t-\tau_2(t)))+0.01\displaystyle\int_{t-\tilde{\tau}_1}^{t}f_1(x_1(s))ds+0.01\displaystyle\int_{t-\tilde{\tau}_2}^{t}f_2(x_2(s))ds-3\Big),\end{cases} \quad (4.1)$$

    where the Gaussian activation functions are $f_1(r)=f_2(r)=\exp(-r^2)$, and $\tau_1(t)=1.5+\cos(t)$, $\tau_2(t)=\frac{2t}{1+t}$, $\tilde{\tau}_1=1.1$, $\tilde{\tau}_2=1.2$.

    It can be seen readily that

    $$\rho_1=\rho_2=1,\quad c_1=c_2=0,\quad \delta_1=\delta_2=\sqrt{2}\exp(-1/2)\approx 0.8578.$$

    Since $m_1(x_1(t))=4+\sin(x_1(t))\in[3,5]$ and $m_2(x_2(t))=5+\sin(x_2(t))\in[4,6]$, Assumption 2.1 is met. Moreover, $\tilde{\tau}=2.5$, $\check{s}_1=-2.53$, $\hat{s}_1=-2.276$, $\check{s}_2=-3.01$, $\hat{s}_2=-2.829$. Hence, the boundary functions are as follows:

    $$W_1^-(r)=-r+3.3\exp(-r^2)-2.53,\quad W_1^+(r)=-r+3.3\exp(-r^2)-2.276,$$
    $$W_2^-(r)=-r+1.1\exp(-r^2)-3.01,\quad W_2^+(r)=-r+1.1\exp(-r^2)-2.829,$$

    where the graphs of these boundary functions are depicted in Figures 1 and 2.

    Figure 1.  The boundary functions $W_1^-(r)$ and $W_1^+(r)$ in Example 4.1.
    Figure 2.  The boundary functions $W_2^-(r)$ and $W_2^+(r)$ in Example 4.1.

    Additionally,

    $$0<\frac{\eta_1}{\beta_{11}}\approx 0.3030<\delta_1\approx 0.8578,\qquad \frac{\eta_2}{\beta_{22}}\approx 1.0909>\delta_2\approx 0.8578,$$

    which demonstrates that $1\in L_1$ and $2\in L_2$.

    By means of further calculations, we can obtain that $\mu=1$, $\check{u}_1\approx-2.5270$, $\hat{u}_1\approx-2.2570$, $\hat{v}_1\approx-0.9550$, $\check{v}_1\approx-0.8480$, $\check{\lambda}_1\approx 0.3629$, $\hat{\lambda}_1\approx 0.4364$, $p_1\approx-1.5261$, $q_1\approx-0.1441$. Also, $W_1^+(p_1)\approx-0.4285<0$ and $W_1^-(q_1)\approx 0.8463>0$. Therefore, $1\in L_1^1$.

    Furthermore, $\check{o}_2\approx-3.0699$, $\hat{o}_2\approx-2.8286$, $f_1'(\check{v}_1)\approx 0.6621$, $f_1'(\hat{v}_1)\approx 0.4270$, $f_1'(\hat{u}_1)\approx 0.1131$, $f_1'(\check{\lambda}_1)\approx-1.2914$, $f_1'(\hat{\lambda}_1)\approx-1.0235$. By computation, $F_1\approx 0.4091$, $F_2\approx 0.2564$.

    Then,

    $$|\beta_{12}|\delta_2+(|\gamma_{11}|+|\gamma_{12}|)\delta_2+(|\varphi_{11}|\tilde{\tau}_1+|\varphi_{12}|\tilde{\tau}_2)\delta_2\approx 0.2187<F_1,$$
    $$|\beta_{21}|\delta_1+(|\gamma_{21}|+|\gamma_{22}|)\delta_1+(|\varphi_{21}|\tilde{\tau}_1+|\varphi_{22}|\tilde{\tau}_2)\delta_1\approx 0.1827<F_2$$

    are met. Hence, according to Theorem 3.1, there are $3$ equilibria for (4.1). By applying MATLAB, we find that these equilibrium points are $(-0.8174,-2.4286)$, $(-2.4930,-2.4979)$, and $(0.3674,-2.3795)$, respectively.
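    The three equilibria can also be cross-checked without MATLAB; the sketch below alternates coordinatewise bisection on the equilibrium equations of Definition 2.1, where the bracketing intervals are our own rough enclosures rather than the candidate regions above:

```python
import numpy as np

f = lambda r: np.exp(-r**2)   # Gaussian activation of Example 4.1

def g1(x1, x2):
    # equilibrium equation of the first neuron (tilde_tau_1 = 1.1, tilde_tau_2 = 1.2)
    return -x1 + (3.3 - 0.03 + 0.01*1.1)*f(x1) + (0.08 + 0.12 + 0.02*1.2)*f(x2) - 2.5

def g2(x1, x2):
    # equilibrium equation of the second neuron
    return -1.2*x2 + (0.16 - 0.01 + 0.01*1.1)*f(x1) + (1.1 + 0.02 + 0.01*1.2)*f(x2) - 3.0

def bisect(h, a, b, n=80):
    # standard bisection; assumes h(a) and h(b) have opposite signs
    fa = h(a)
    for _ in range(n):
        m = 0.5*(a + b)
        if fa*h(m) > 0:
            a, fa = m, h(m)
        else:
            b = m
    return 0.5*(a + b)

equilibria = []
for a, b in [(-2.6, -2.1), (-1.0, -0.6), (0.2, 0.6)]:   # one bracket per equilibrium
    x2 = -2.5                       # initial guess for the second coordinate
    for _ in range(25):             # alternate the two coordinates until convergence
        x1 = bisect(lambda r: g1(r, x2), a, b)
        x2 = bisect(lambda r: g2(x1, r), -3.5, -2.0)
    equilibria.append((round(x1, 4), round(x2, 4)))
print(equilibria)                   # close to the MATLAB values reported above
```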

    Moreover, $\varrho_1=\min(p_1-\hat{u}_1,\check{\lambda}_1-q_1)\approx 0.5070$, $\varrho_2=1$, so let $\varrho=0.1$. From Theorem 3.2, there exist two positively invariant sets, which are $[-2.627,-2.157]\times[-3.1699,-2.7286]$ and $[0.2629,0.5364]\times[-3.1699,-2.7286]$.

    Next, we check the stability condition (3.8) in Theorem 3.3. Select $\sigma_1=\sigma_2=1$, $\Psi_1=\max(f_1'(\hat{u}_1+\varrho),f_1'(\check{\lambda}_1-\varrho),f_1'(\hat{\lambda}_1+\varrho))\approx 0.1589$, $\Psi_2=\delta_2\approx 0.8578$. Now, we let $\mathcal{F}(t)$ be an exponential function with the expression $\mathcal{F}(t)=\exp(0.06t)$, so $\varepsilon=0.06$. By further calculation,

    $$(1-3.3\times 0.1589-0.08\times 0.8578-0.06)-(0.03\times 0.1589+0.12\times 0.8578)\exp(0.15)-(0.01\times 0.1589\times\exp(0.066)\times 1.1+0.02\times 0.8578\times\exp(0.072)\times 1.2)\approx 0.1981>0,$$
    $$(1.2-1.1\times 0.8578-0.16\times 0.1589-0.06)-(0.01\times 0.1589+0.02\times 0.8578)\exp(0.12)-(0.01\times 0.1589\times\exp(0.066)\times 1.1+0.01\times 0.8578\times\exp(0.072)\times 1.2)\approx 0.1364>0.$$
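    The two left-hand sides of condition (3.8) can be re-computed directly (a sketch; the values $\Psi_1\approx 0.1589$ and $\Psi_2\approx 0.8578$ are taken from the text above):

```python
import numpy as np

# Re-compute the two left-hand sides of condition (3.8) for Example 4.1
# with sigma_1 = sigma_2 = 1 and F(t) = exp(0.06 t), so eps = 0.06.
Psi1, Psi2, eps = 0.1589, 0.8578, 0.06

lhs1 = ((1 - 3.3*Psi1 - 0.08*Psi2 - eps)
        - (0.03*Psi1 + 0.12*Psi2)*np.exp(0.15)
        - (0.01*Psi1*np.exp(0.066)*1.1 + 0.02*Psi2*np.exp(0.072)*1.2))
lhs2 = ((1.2 - 1.1*Psi2 - 0.16*Psi1 - eps)
        - (0.01*Psi1 + 0.02*Psi2)*np.exp(0.12)
        - (0.01*Psi1*np.exp(0.066)*1.1 + 0.01*Psi2*np.exp(0.072)*1.2))
print(round(lhs1, 4), round(lhs2, 4))   # both positive, so (3.8) holds
```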

    The result shows that (3.8) holds; that is, the equilibrium points $(-2.4930,-2.4979)$ and $(0.3674,-2.3795)$ are multimode function stable, whereas the equilibrium point $(-0.8174,-2.4286)$ is unstable. The trajectory behavior of (4.1) and the equilibrium points are illustrated in Figures 3–5.

    Figure 3.  Transient behavior of x1 in Example 4.1.
    Figure 4.  Transient behavior of x2 in Example 4.1.
    Figure 5.  The transient behavior of (x1,x2) of (4.1).
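    The convergence behavior shown in Figures 3–5 can be reproduced with a simple forward-Euler scheme for the delayed system (4.1); the step size, horizon, and constant initial history below are our own choices, not taken from the paper:

```python
import numpy as np

f = lambda r: np.exp(-r**2)          # Gaussian activation, c_i = 0, rho_i = 1

dt, T = 0.005, 20.0
tau_max = 2.5                        # sup of tau_1(t) = 1.5 + cos(t)
n_hist, n_steps = int(round(tau_max/dt)), int(round(T/dt))

# constant initial history phi = (-2.4, -2.4) on [-tau_max, 0]
x1 = np.full(n_hist + n_steps + 1, -2.4)
x2 = np.full(n_hist + n_steps + 1, -2.4)

for k in range(n_steps):
    i, t = n_hist + k, k*dt
    d1 = int(round((1.5 + np.cos(t)) / dt))     # discrete tau_1(t)
    d2 = int(round((2*t/(1 + t)) / dt))         # discrete tau_2(t)
    # distributed delays: left Riemann sums over the stored history
    I1 = f(x1[i - int(round(1.1/dt)):i]).sum() * dt   # integral over [t-1.1, t]
    I2 = f(x2[i - int(round(1.2/dt)):i]).sum() * dt   # integral over [t-1.2, t]
    rhs1 = (-x1[i] + 3.3*f(x1[i]) + 0.08*f(x2[i]) - 0.03*f(x1[i-d1])
            + 0.12*f(x2[i-d2]) + 0.01*I1 + 0.02*I2 - 2.5)
    rhs2 = (-1.2*x2[i] + 0.16*f(x1[i]) + 1.1*f(x2[i]) - 0.01*f(x1[i-d1])
            + 0.02*f(x2[i-d2]) + 0.01*I1 + 0.01*I2 - 3.0)
    x1[i+1] = x1[i] + dt*(4 + np.sin(x1[i]))*rhs1
    x2[i+1] = x2[i] + dt*(5 + np.sin(x2[i]))*rhs2

print(x1[-1], x2[-1])   # settles near the stable equilibrium (-2.4930, -2.4979)
```

    Starting the history near a different stable region, for example near $(0.4,-2.4)$, drives the trajectory to the other stable equilibrium instead.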

    Example 4.2. Consider the 2-dimensional CGNN with Gaussian activation functions and mixed time delays presented below:

    $$\begin{cases}\dfrac{dx_1(t)}{dt}=(2+\sin(x_1(t)))\Big(-x_1(t)+6.3f_1(x_1(t))+0.2f_2(x_2(t))+0.1f_1(x_1(t-\tau_1(t)))+0.5f_2(x_2(t-\tau_2(t)))+0.03\displaystyle\int_{t-\tilde{\tau}_1}^{t}f_1(x_1(s))ds+0.02\displaystyle\int_{t-\tilde{\tau}_2}^{t}f_2(x_2(s))ds-4.5\Big),\\ \dfrac{dx_2(t)}{dt}=(4+\cos(x_2(t)))\Big(-1.5x_2(t)+0.66f_1(x_1(t))+7.5f_2(x_2(t))+0.1f_1(x_1(t-\tau_1(t)))+0.3f_2(x_2(t-\tau_2(t)))+0.02\displaystyle\int_{t-\tilde{\tau}_1}^{t}f_1(x_1(s))ds+0.01\displaystyle\int_{t-\tilde{\tau}_2}^{t}f_2(x_2(s))ds-4\Big),\end{cases} \quad (4.2)$$

    where the Gaussian activation functions are $f_1(r)=f_2(r)=\exp(-r^2)$, and $\tau_1(t)=1+0.5\sin(t)$, $\tau_2(t)=\frac{t}{1+t}$, $\tilde{\tau}_1=1.15$, $\tilde{\tau}_2=1.22$.

    It is apparent that

    ρ1=ρ2=1, c1=c2=0,
    δ1=δ2=√2exp(−1/2)≈0.8578.
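    The constant δ1=δ2 is the largest slope of the Gaussian activation: |f′(r)|=2|r|exp(−r²) peaks at r=±1/√2 with value √2exp(−1/2). A minimal numerical check (plain Python, standard library only):

```python
import math

def gauss_slope(r):
    # |f'(r)| for the Gaussian activation f(r) = exp(-r^2)
    return abs(-2.0 * r * math.exp(-r * r))

# Grid search over a range that certainly contains the peak at r = -1/sqrt(2)
delta_numeric = max(gauss_slope(-3 + 1e-4 * k) for k in range(60001))
delta_closed = math.sqrt(2.0) * math.exp(-0.5)  # closed form of delta

print(round(delta_numeric, 4), round(delta_closed, 4))
```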

    Since m1(x1(t))=2+sin(x1(t))∈[1,3] and m2(x2(t))=4+cos(x2(t))∈[3,5], Assumption 2.1 is met. Moreover, ˜τ=1.5, ˇs1=−4.5, ˆs1=−3.7756, ˇs2=−5, ˆs2=−4.217. Hence, the boundary functions are as follows:

    W−1(r)=−r+6.3exp(−r²)−4.5,  W+1(r)=−r+6.3exp(−r²)−3.7756,
    W−2(r)=−r+7.5exp(−r²)−5,  W+2(r)=−r+7.5exp(−r²)−4.217,

    where the graphs of these boundary functions are portrayed in Figures 6 and 7.

    Figure 6.  The boundary functions W−1(r) and W+1(r) in Example 4.2.
    Figure 7.  The boundary functions W−2(r) and W+2(r) in Example 4.2.
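    The three zeros of each boundary function can be recovered by bisection. The script below (Python; the signs of W±i are written out as reconstructed, which is an assumption) reproduces the values ˇu1, ˇv1, ˇλ1, and so on quoted in the sequel:

```python
import math

def bisect(g, a, b, tol=1e-12):
    # Plain bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * g(m) <= 0:
            b = m
        else:
            a, fa = m, g(m)
    return 0.5 * (a + b)

# Reconstructed boundary functions of Example 4.2 (sign pattern assumed)
W1m = lambda r: -r + 6.3 * math.exp(-r * r) - 4.5
W1p = lambda r: -r + 6.3 * math.exp(-r * r) - 3.7756
W2m = lambda r: -r + 7.5 * math.exp(-r * r) - 5.0
W2p = lambda r: -r + 7.5 * math.exp(-r * r) - 4.217

# Each function has three zeros: u (far left), v (middle), lambda (right)
roots = {name: [bisect(g, a, b) for a, b in [(-8, -2), (-2, 0), (0, 2)]]
         for name, g in [("W1-", W1m), ("W1+", W1p),
                         ("W2-", W2m), ("W2+", W2p)]}
for name, rs in roots.items():
    print(name, [round(r, 4) for r in rs])
```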

    Additionally,

    0<η1/β11≈0.1587<δ1≈0.8578,
    0<η2/β22=0.2<δ2≈0.8578,

    which implies that 1∈L11, 2∈L11, and μ=2.

    By means of further calculations, ˇu1≈−4.5, ˆu1≈−3.7756, ˆv1≈−0.8821, ˇv1≈−0.7135, ˇλ1≈0.4841, ˆλ1≈0.6031, p1≈−1.7400, q1≈−0.1230. Also, ˇu2≈−5, ˆu2≈−4.217, ˆv2≈−0.9039, ˇv2≈−0.7543, ˇλ2≈0.5489, ˆλ2≈0.6566, p2≈−1.8380, q2≈−0.02.

    Furthermore, f′1(ˇv1)≈0.8577, f′1(ˆv1)≈0.8103, f′1(ˆu1)≈0, f′1(ˇλ1)≈−0.7659, f′1(ˆλ1)≈−0.8384, f′2(ˆv2)≈0.7986, f′2(ˇv2)≈0.8540, f′2(ˆu2)≈0, f′2(ˇλ2)≈−0.8122, f′2(ˆλ2)≈−0.8533. By computation, F1=1, F2=1.5. Then,

    |β12|δ2+(|γ11|+|γ12|)δ2+(|φ11|˜τ1+|φ12|˜τ2)δ2=0.7368<F1,
    |β21|δ1+(|γ21|+|γ22|)δ1+(|φ21|˜τ1+|φ22|˜τ2)δ2=0.9395<F2

    are met. Hence, according to Theorem 3.1, there are 3²=9 equilibrium points for (4.2). Moreover, ϱ1=min(p1−ˆu1, ˇλ1−q1)≈0.6071, ϱ2=min(p2−ˆu2, ˇλ2−q2)≈0.5689, so let ϱ=0.1. From Theorem 3.2, there exist four positively invariant sets, which are [−4.6,−3.6756]×[−5.1,−4.217], [0.3841,0.7031]×[−5.1,−4.217], [−4.6,−3.6756]×[0.4489,0.7566], and [0.3841,0.7031]×[0.4489,0.7566]. By applying MATLAB, the stable equilibrium points are (−4.441,−2.639), (0.498,−2.355), (0.5126,0.7593), and (−4.193,0.6693).
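    At an equilibrium, the delayed and distributed-delay terms equal their instantaneous values, so the equilibria of (4.2) solve a two-dimensional algebraic system. The sketch below (Python with a hand-rolled Newton iteration, standing in for the paper's MATLAB computation; the sign pattern of (4.2) is reconstructed, so the roots land near, though not exactly on, the quoted points) also replays the Theorem 3.1 sums:

```python
import math

f = lambda r: math.exp(-r * r)            # Gaussian activation
df = lambda r: -2.0 * r * math.exp(-r * r)  # its derivative

delta = math.sqrt(2.0) * math.exp(-0.5)   # ~0.8578
tau1, tau2 = 1.15, 1.22

# Theorem 3.1 condition sums (should be < F1 = 1 and < F2 = 1.5)
cond1 = (0.2 + (0.1 + 0.5) + (0.03 * tau1 + 0.02 * tau2)) * delta
cond2 = (0.66 + (0.1 + 0.3) + (0.02 * tau1 + 0.01 * tau2)) * delta
print(round(cond1, 4), round(cond2, 4))

# Equilibrium equations of the reconstructed system (4.2)
def G(x1, x2):
    g1 = -x1 + (6.3 + 0.1 + 0.03 * tau1) * f(x1) \
         + (0.2 + 0.5 + 0.02 * tau2) * f(x2) - 4.5
    g2 = -1.5 * x2 + (0.66 + 0.1 + 0.02 * tau1) * f(x1) \
         + (7.5 + 0.3 + 0.01 * tau2) * f(x2) - 4.0
    return g1, g2

def newton(x1, x2, iters=60):
    # 2-D Newton iteration with the analytic Jacobian of G
    for _ in range(iters):
        g1, g2 = G(x1, x2)
        a = -1.0 + (6.3 + 0.1 + 0.03 * tau1) * df(x1)
        b = (0.2 + 0.5 + 0.02 * tau2) * df(x2)
        c = (0.66 + 0.1 + 0.02 * tau1) * df(x1)
        d = -1.5 + (7.5 + 0.3 + 0.01 * tau2) * df(x2)
        det = a * d - b * c
        x1 -= (g1 * d - b * g2) / det
        x2 -= (a * g2 - g1 * c) / det
    return x1, x2

starts = [(-4.441, -2.639), (0.498, -2.355), (0.5126, 0.7593), (-4.193, 0.6693)]
eqs = [newton(*p) for p in starts]
print([(round(a, 3), round(b, 3)) for a, b in eqs])
```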

    Next, we need to check the stability condition (3.8) of Theorem 3.3. Select σ1=σ2=1, Ψ1=max(|f′1(ˆu1+ϱ)|, |f′1(ˇλ1−ϱ)|, |f′1(ˆλ1+ϱ)|), Ψ2=0.8578. Now, we let P(t) be a logarithmic function, P(t)=ln(t+8.0101), with ε=0.5, so that P′(t)/P(t)=1/((t+8.0101)ln(t+8.0101))≤0.5=ε. By further calculating,

    η1−ε=0.5>0, η2−ε=1>0.

    The result shows that (3.8) holds; that is, the equilibrium points (−4.441,−2.639), (0.498,−2.355), (0.5126,0.7593), and (−4.193,0.6693) are multimode function stable. The trajectory behavior of (4.2), together with the equilibrium points in this case, is portrayed in Figures 8–10.
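    The logarithmic multimode function can be sanity-checked as well: for P(t)=ln(t+8.0101), the ratio P′(t)/P(t)=1/((t+8.0101)ln(t+8.0101)) is decreasing in t and peaks at t=0 at about 0.06, safely below ε=0.5 (this reading of the garbled display is an assumption):

```python
import math

c = 8.0101
ratio = lambda t: 1.0 / ((t + c) * math.log(t + c))  # P'(t)/P(t) for P(t)=ln(t+c)

peak = max(ratio(0.01 * k) for k in range(10001))  # sample t in [0, 100]
print(round(peak, 4))  # the ratio is decreasing, so the peak sits at t = 0

eps = 0.5
print(1.0 - eps, 1.5 - eps)  # eta_i - eps, both positive
```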

    Figure 8.  Transient behavior of x1 in Example 4.2.
    Figure 9.  Transient behavior of x2 in Example 4.2.
    Figure 10.  The transient behavior of (x1,x2) of (4.2).

    In this paper, we probe into multimode function multistability of CGNNs with Gaussian activation functions and mixed time delays. Specifically, owing to the special geometric properties of Gaussian functions, the state space of an n-dimensional CGNN can be divided into 3^μ subspaces (0≤μ≤n). Further exploiting Brouwer's fixed point theorem and contraction mapping, we conclude that each subspace contains exactly one equilibrium point; that is, there are exactly 3^μ equilibria for CGNNs with Gaussian activation functions and mixed time delays. Subsequently, by analyzing the invariant sets, it is deduced that 2^μ equilibrium points are multimode function stable, while the remaining 3^μ−2^μ equilibrium points are unstable. This work extends existing results on multimode function multistability, offering effective assistance in the dynamic analysis of CGNNs with specific activation functions and mixed time delays.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare that there are no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
