1. Introduction
Cohen-Grossberg neural networks (CGNNs) are a class of feedback artificial neural networks, sharing common characteristics with other artificial neural networks in terms of information transfer and feedback mechanisms. CGNNs are highly adaptable neural network models (see, e.g., [1,2,3]) that include well-known architectures such as Hopfield neural networks and cellular neural networks as special cases, so the dynamic properties of a multitude of neural networks can be considered simultaneously when studying CGNNs. In addition, CGNNs offer extensive application prospects across various fields, including pattern recognition, classification and associative memory (see, e.g., [4,5,6]). Stability is a prerequisite for the effectiveness of CGNNs in these applications. Moreover, to obtain a larger storage capacity, CGNNs are designed with multiple stable equilibrium points. This has attracted many researchers to explore the multistability of CGNNs.
Multistability analysis is typically more challenging than mono-stability analysis, since the phase space needs to be effectively partitioned into subsets containing equilibrium points according to the type of activation function. By dividing the state space, the dynamics of the equilibrium points in each subset can be studied. Naturally, there are valuable works addressing this issue (see, e.g., [7,8,9,10,11]). In [9], Liu et al. investigated multistability in fractional-order recurrent neural networks by exploiting an activation function and a nonsingular M-matrix, and they concluded that there exist ∏ni=1(2Ki+1) equilibria, among which ∏ni=1(Ki+1) equilibria are locally Mittag-Leffler stable. In [11], the authors explored multistability of CGNNs with non-monotonic activation functions and time-varying delays, and they found that one can obtain (2K+1)n equilibria for n-neuron CGNNs, with (K+1)n of them being exponentially stable. In addition, Wan and Liu [12] studied the multiple O(t−q) stability of fractional-order CGNNs with Gaussian activation functions.
To the best of our knowledge, it is agreed that the number of equilibrium points in the multistability analysis of neural networks is intimately connected with the type of activation function. Activation functions widely utilized in the existing literature include the saturation function, Gaussian function, sigmoid function, Mexican-hat function [13], etc. Among them, the Gaussian function endows neural networks with greater modeling power and adaptability owing to its being nonmonotonic, bounded, symmetric, strongly nonlinear, and nonnegative. Additionally, research has conclusively shown that employing Gaussian activation functions in neural networks can accelerate learning and improve prediction (see, e.g., [14,15]). As such, it is indispensable to analyze the dynamical behavior of neural networks with Gaussian activation functions. In the related literature, Liu et al. [16] addressed the stability of recurrent neural networks with Gaussian activation functions by analyzing the geometric properties of the Gaussian function. They concluded that there exist exactly 3k equilibrium points, of which 2k are locally exponentially stable while 3k−2k are unstable. In [17], the dynamical behaviors of multiple equilibria for fractional-order competitive neural networks with Gaussian activation functions were explored.
Due to the limited switching speeds and constrained signal propagation rates of neural amplifiers, it is imperative not to neglect time delays in neural networks (see, e.g., [18,19,20]). In fact, for some neurons, discrete-time delays offer a well-approximated and simplified circuit model for representing delay feedback systems. It is worth noting that neural networks typically exhibit spatial expansion, since they consist of numerous parallel pathways with varying axon sizes and lengths. In such cases, the transmission of signals is no longer instantaneous and cannot be adequately characterized by discrete-time delays alone. That is, it is reasonable to include distributed time delays in neural networks, which reveal more realistically the characteristics of neurons in the human brain (see, e.g., [21,22,23]). Therefore, it is both natural and necessary to analyze CGNNs with mixed time delays.
Nowadays, there are several frequently mentioned types of stability, such as asymptotic stability (see, e.g., [24,25]), exponential stability (see, e.g., [26,27]), logarithmic stability and polynomial stability [28]. In general, differences in stability indicate different convergence paradigms, allowing systems to satisfy the corresponding evolutionary requirements. Recently, a novel category of stability known as multimode function stability has been explored. This form of stability realizes all of the aforementioned types simultaneously. It has also been shown that multimode function stability can be employed in image processing and pattern recognition to construct neural network architectures with multiple feature extraction modes [29]. In [30], the authors presented multimode function multistability along with its specific formula. The state space was partitioned into ∏ni=1(2Hi+1) regions based on the positions of the zeros of boundary functions. Furthermore, through the application of the Lyapunov stability theorem and a fixed point theorem, some associated criteria for multimode function multistability were obtained.
As indicated by the preceding analysis, many previous papers either analyzed only the multistability of CGNNs with/without time-varying delays and Gaussian activation functions, or solely examined the multimode function multistability of neural networks with mixed delays. There are few works studying the multimode function multistability of CGNNs with both Gaussian activation functions and mixed time delays, and that is precisely the problem addressed here. The advantages of this paper can be summarized as follows. First, this paper focuses on a specific class of activation functions, namely, Gaussian functions. Through the geometric properties of Gaussian functions, the state space can be partitioned into 3μ subspaces, where 0≤μ≤n. In contrast to the class of strictly nonlinear and monotonic activation functions considered in [30], the number of equilibrium points is explicitly determined here. Second, multimode function multistability is discussed. Quite different from most of the existing literature concerning the multistability of CGNNs with Gaussian functions, the multimode function multistability results derived herein cover multiple asymptotic stability, multiple exponential stability, multiple polynomial stability, and multiple logarithmic stability, so the results presented in this paper are more universal. Finally, relying on the geometric properties of Gaussian functions and a fixed point theorem, we deduce sufficient conditions that guarantee the coexistence of exactly 3μ equilibria for an n-dimensional neural network, among which 2μ equilibrium points are multimode function stable and 3μ−2μ equilibrium points are unstable. The results obtained here supplement the existing multimode function multistability criteria.
Notations. In this article, for a given vector x=(x1,x2,...,xn)T∈Rn, define ‖x‖=max1≤i≤n(|xi|), and ˜τ=max1≤j≤n(˜τj,supt≥0τj(t)). Define C([−˜τ,0],D) as the Banach space of continuous functions ϕ: [−˜τ,0]⟶D⊂Rn. Let ‖ϕ‖˜τ=max1≤i≤n(sup−˜τ≤r≤0|ϕi(r)|).
2. Preliminaries
We introduce CGNNs with Gaussian activation functions and mixed time delays as follows:
where x(t)=(x1(t),x2(t),...,xn(t))T∈Rn is the state vector, ηi stands for the strength of self-inhibition, mi(⋅) is the amplification function, βij, γij and φij are connection weights, τj(⋅)≥0 represents the time-varying delay, ˜τj in the distributed delay term satisfies ˜τj>0, and Ii denotes the external input. fi(⋅) is a Gaussian function with the expression:
where fi(r) in (2.2) satisfies fi(r)∈(0,1] for r∈R, ci represents the center, and ρi>0 denotes the width.
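For readers who wish to experiment, a minimal sketch of the Gaussian activation (with illustrative center and width values) confirms the stated properties: values in (0,1], a maximum at the center, and symmetry about it.

```python
import math

def gaussian(r, c=0.0, rho=1.0):
    """Gaussian activation f(r) = exp(-(r - c)^2 / rho^2); c is the center, rho > 0 the width."""
    return math.exp(-((r - c) ** 2) / rho ** 2)

# f takes values in (0, 1], peaks at the center, and is symmetric about it.
samples = [gaussian(r, c=1.0, rho=2.0) for r in [-5, -1, 0, 1, 2, 3, 7]]
assert all(0 < v <= 1 for v in samples)
assert gaussian(1.0, c=1.0, rho=2.0) == 1.0          # maximum at r = c
assert math.isclose(gaussian(0.0, c=1.0, rho=2.0),   # symmetry about c
                    gaussian(2.0, c=1.0, rho=2.0))
```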
The initial value of (2.1) can be written as
Prior to the study, we need to recall some definitions and consider some assumptions which will be applied in subsequent content.
Assumption 2.1. There are positive constants ˊmi and ˊMi such that
Definition 2.1. A constant vector x∗=(x∗1,...,x∗n)T is regarded as an equilibrium point of (2.1), if x∗ satisfies
Definition 2.2. Suppose that xi(t) is the solution of neural network (2.1) with initial condition (2.3). A given set Θ is called a positive invariant set if, whenever the initial condition satisfies ϕi(t0)∈Θ, then xi(t)∈Θ for all t≥t0.
Definition 2.3. Assume x∗∈D is an equilibrium point of (2.1), and D⊂Rn is a positively invariant set. Furthermore, suppose that ℏ(t) is a continuous, monotonically nondecreasing function satisfying ℏ(t)>0 for t≥0, ℏ(r)=ℏ(0) for r∈[−˜τ,0], and limt→∞ℏ(t)=+∞. If
holds for any initial value ϕ(r)∈D, r∈[−˜τ,0], where ι>0 is a constant, then (2.1) is locally multimode function stable.
Calculating the first- and second-order derivatives of the activation function fi(r):
we find that f′i(r) has one root ri=ci by solving the equation f′i(r)=0. Analogously, by solving the equation f″i(r)=0, we obtain the two roots of f″i(r):
Since f″i(r)>0 for r∈(−∞,C−i)∪(C+i,+∞) and f″i(r)<0 for r∈(C−i,C+i), we can conclude that C−i and C+i are the maximum and minimum points of f′i(r), respectively. The maximum and minimum values of f′i(r) are f′i(C−i)=√2exp(−1/2)/ρi and f′i(C+i)=−√2exp(−1/2)/ρi, respectively. For the convenience of discussion, we define δi=√2exp(−1/2)/ρi, i=1,2,...,n.
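These expressions can be verified numerically. The following sketch (with an illustrative width ρ=1.5 and center c=0) confirms that C−i and C+i maximize and minimize f′i, and that the extreme slopes equal ±√2exp(−1/2)/ρi.

```python
import math

def fp(r, c=0.0, rho=1.0):
    """First derivative of f(r) = exp(-(r - c)^2 / rho^2)."""
    return -2.0 * (r - c) / rho**2 * math.exp(-((r - c) ** 2) / rho**2)

c, rho = 0.0, 1.5
delta = math.sqrt(2.0) * math.exp(-0.5) / rho      # claimed extreme slope
C_minus = c - rho / math.sqrt(2.0)                 # inflection points of f
C_plus = c + rho / math.sqrt(2.0)

assert math.isclose(fp(C_minus, c, rho), delta, rel_tol=1e-9)
assert math.isclose(fp(C_plus, c, rho), -delta, rel_tol=1e-9)

# A grid search confirms C-/C+ are the global max/min points of f'.
grid = [c + 1e-3 * k for k in range(-6000, 6001)]
assert max(fp(r, c, rho) for r in grid) <= delta + 1e-9
assert min(fp(r, c, rho) for r in grid) >= -delta - 1e-9
```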
Since fi(r)∈(0,1] for all i=1,2,...,n, let
Define the boundary functions:
and simultaneously, we define
where ˉsi∈(ˇsi,ˆsi) is a constant. Then W−i(xi(t)), ˉWi(xi(t)), and W+i(xi(t)) are vertical shifts of one another.
Let N={1,2,...,n}. According to the specific values of the parameters ηi and βii, define
Lemma 2.1 ([16]). If i∈L1 or L3, then there are pi,qi such that ˉW′i(pi)=ˉW′i(qi)=0, where pi<C−i<qi<ci; if i∈L2 or L4, then ˉW′i(r)<0 for r∈R.
For the sake of discussion, the subsequent subsets of L1 and L3 are considered:
Lemma 2.2 ([16]). If i∈L11∪L13, then there exist three zeros ˇui,ˇvi,ˇλi for W−i(r) and three zeros ˆui,ˆvi,ˆλi for W+i(r), satisfying ˇui<ˆui<pi<ˆvi<ˇvi<qi<ˇλi<ˆλi.
If i∈L21∪L23, then there exists one zero ˇoi for W−i(r), and one zero ˆoi for W+i(r), satisfying ˇoi<ˆoi<pi.
If i∈L31∪L33, then there exists one zero ˇoi for W−i(r), and one zero ˆoi for W+i(r), satisfying qi<ˇoi<ˆoi.
If i∈L2∪L4, then there exists one zero ˇoi for W−i(r), and one zero ˆoi for W+i(r), satisfying ˇoi<ˆoi.
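Since the explicit displays of W−i and W+i are omitted above, the sketch below assumes (consistently with the boundary functions computed in Section 4) the scalar form W(r)=−ηr+βf(r)+s with a Gaussian f; all parameter values are illustrative. Counting sign changes on a fine grid reproduces the three-zero versus one-zero dichotomy of Lemma 2.2.

```python
import math

def f(r):
    return math.exp(-r * r)            # Gaussian with center 0, width 1

def W(r, eta, beta, s):
    # Assumed boundary-function shape: -eta*r + beta*f(r) + shift s.
    return -eta * r + beta * f(r) + s

def count_zeros(eta, beta, s, lo=-10.0, hi=10.0, n=20000):
    """Count sign changes of W on a fine grid (each one brackets a zero)."""
    zeros = 0
    prev = W(lo, eta, beta, s)
    for k in range(1, n + 1):
        cur = W(lo + (hi - lo) * k / n, eta, beta, s)
        if prev * cur < 0:
            zeros += 1
        prev = cur
    return zeros

# With eta = 1, beta = 3, the slope condition beta*delta > eta holds
# (delta = sqrt(2)*exp(-1/2) ~ 0.858), so a suitable shift s gives three
# zeros, while a shift outside that window gives exactly one.
assert count_zeros(eta=1.0, beta=3.0, s=-2.5) == 3
assert count_zeros(eta=1.0, beta=3.0, s=1.0) == 1
```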
3. Main results
3.1. Equilibrium points
In what follows, the number of equilibrium points of (2.1) is explored. Let card(Q) represent the cardinality of a given set Q. Define μ=card(L11∪L13), k=card(L21∪L31∪L2∪L23∪L33∪L4), and let
The following assumption is required so as to ascertain the number of equilibrium points of (2.1).
Assumption 3.1. k+μ=n.
Consequently, it can be seen that there exist 3μ elements in Θ.
Theorem 3.1. Suppose Assumption 3.1 holds. Further assume that
where i∈N, and Fi is given in Table 1. Then, neural network (2.1) has exactly 3μ equilibria in Rn.
Proof. We first demonstrate the existence of equilibrium points of (2.1) for any Θ(1)=∏ni=1li=∏ni=1[di,gi]∈Θ.
With regard to any given x=(x1,x2,...,xn)T and index i∈N, define the following function:
Comparing Wi(r) with W+i(r),W−i(r), we can get that W−i(r)≤Wi(r)≤W+i(r) for r∈[di,gi]. Then, two cases will be considered.
Case 1: when li=[ˆvi,ˇvi], we can obtain
Case 2: when li≠[ˆvi,ˇvi], we get
Taken together, Wi(di)Wi(gi)≤0, hence there exists ˉxi∈[di,gi] satisfying Wi(ˉxi)=0 for i=1,2,...,n. Define a continuous mapping Ξ : Θ(1)→Θ(1), Ξ(x1,x2,...,xn)=(ˉx1,ˉx2,...,ˉxn)T. By virtue of Brouwer's fixed point theorem, there exists a fixed point x∗=(x∗1,x∗2,...,x∗n)T of Ξ, which is also an equilibrium point of (2.1).
Next, we establish the uniqueness of the equilibrium point in Θ(1). For any x,y∈Θ(1), suppose that Ξ(x)=x∗, Ξ(y)=y∗, and x∗,y∗ are both roots of Wi(r).
Hence,
Subtracting (3.3) from (3.2), it follows that
where min(x∗i,y∗i)≤ξ∗i≤max(x∗i,y∗i). In the following, eight situations are discussed.
Case 1: i∈L11.
If ξ∗i∈[ˇui,ˆui], we have f′i(ξ∗i)≤ηiβii and 0<f′i(ˇui)≤f′i(ξ∗i)≤f′i(ˆui); hence
If ξ∗i∈[ˆvi,ˇvi], we can get f′i(ξ∗i)≥ηiβii, and 0<min{f′i(ˇvi),f′i(ˆvi)}≤f′i(ξ∗i)≤δi, then
If ξ∗i∈[ˇλi,ˆλi], we can obtain f′i(ξ∗i)≤ηiβii, and −δi≤f′i(ξ∗i)≤max{f′i(ˇλi),f′i(ˆλi)}, then
Case 2: i∈L21. In this case, ξ∗i∈[ˇoi,ˆoi], and we have f′i(ξ∗i)≤ηiβii and 0<f′i(ˇoi)≤f′i(ξ∗i)≤f′i(ˆoi). Hence,
Case 3: i∈L31. In this case, f′i(ξ∗i)≤ηiβii and f′i(ξ∗i)≤max{f′i(ˇoi),f′i(ˆoi)}. Hence,
Case 4: i∈L2. In this case, ξ∗i∈[ˇoi,ˆoi],f′i(ξ∗i)≤δi<ηiβii, so we can get
Case 5: i∈L13.
If ξ∗i∈[ˇui,ˆui], we have βii<0, f′i(ξ∗i)≥ηiβii, and min{f′i(ˇui),f′i(ˆui)}≤f′i(ξ∗i)≤δi. Hence,
If ξ∗i∈[ˆvi,ˇvi], we can get βii<0,f′i(ξ∗i)≤ηiβii, and −δi≤f′i(ξ∗i)≤max{f′i(ˇvi),f′i(ˆvi)}<0. Then,
If ξ∗i∈[ˇλi,ˆλi], we can obtain βii<0,f′i(ξ∗i)≥ηiβii, and f′i(ˇλi)≤f′i(ξ∗i)≤f′i(ˆλi)<0. Then,
Case 6: i∈L23. In this case, βii<0 and ξ∗i∈[ˇoi,ˆoi]. We have f′i(ξ∗i)≥ηiβii and min{f′i(ˇoi),f′i(ˆoi)}≤f′i(ξ∗i)≤δi. Hence,
Case 7: i∈L33. In this case, βii<0,f′i(ξ∗i)≥ηiβii and f′i(ˇoi)≤f′i(ξ∗i)≤f′i(ˆoi)<0. Hence,
Case 8: i∈L4. In this case, βii<0,ξ∗i∈[ˇoi,ˆoi], and f′i(ξ∗i)≥−δi>ηiβii, so we can get
Based on the above discussion,
where Δ=max1≤i≤n(∑nj=1,j≠iδj|βij|+∑nj=1δj|γij|+∑nj=1δj|φij|˜τjFi), and Fi is described in Table 1.
Recalling (3.1), Δ<1. Consequently, Ξ is a contraction mapping on Θ(1)∈Θ, and hence a unique equilibrium point exists within Θ(1). From Assumption 3.1, the number of elements of Θ is 3μ, so neural network (2.1) has exactly 3μ equilibrium points. □
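The contraction step can be illustrated with a hedged one-neuron analogue: if η exceeds βδ, the map x↦(βf(x)+s)/η (whose fixed points solve −ηx+βf(x)+s=0) is a contraction, and Picard iteration converges to the unique equilibrium. All parameter values below are assumptions for illustration, not taken from the paper.

```python
import math

eta, beta, s = 2.0, 1.0, 1.0
delta = math.sqrt(2.0) * math.exp(-0.5)           # sup|f'| for rho = 1
assert beta * delta / eta < 1.0                    # contraction condition

def Xi(x):
    """Equilibrium map: fixed points satisfy -eta*x + beta*f(x) + s = 0."""
    return (beta * math.exp(-x * x) + s) / eta

x = 0.0
for _ in range(200):                               # Picard iteration
    x = Xi(x)

# The limit is the unique equilibrium of -eta*x + beta*f(x) + s = 0.
assert abs(Xi(x) - x) < 1e-12
assert abs(-eta * x + beta * math.exp(-x * x) + s) < 1e-11
```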
3.2. Multimode function multistability
From the discussion in the preceding subsection, we have obtained that there are exactly 3μ equilibrium points. In this subsection, we will inquire into the multimode function stability of 3μ equilibria for CGNNs with Gaussian activation functions and mixed time delays. For this purpose, the invariant set needs to be specified.
Define
where 0<ϱ<min1≤i≤n(ϱi), and define
Let Θϱ(1)=∏ni=1[ˇ∂i−ϱ,ˆ∂i+ϱ] and ˇΘϱ(1)=∏ni=1[ˇϵi−ϱ,ˆϵi+ϱ] be elements of Θϱi and ˇΘϱi, respectively.
Remark 3.1. Under the conditions of Theorem 3.1, it is observed that there are exactly 2μ elements in Θϱi and 3μ−2μ elements in ˇΘϱi.
Theorem 3.2. Suppose Assumption 3.1 holds. Then, Θϱ(1)∈Θϱi is a positive invariant set for the initial state of (2.1) with ϕi(t0)∈Θϱ(1).
Proof. For any initial value ϕi(s)∈C([−˜τ,0],D), if ϕi(t0)∈[ˇ∂i−ϱ,ˆ∂i+ϱ], we require that the corresponding solution xi(t) of (2.1) satisfies xi(t)∈[ˇ∂i−ϱ,ˆ∂i+ϱ] for all t≥t0. Otherwise, there must exist an index i, instants t2>t1>t0, and a sufficiently small positive number ω such that
On the other hand, it is not difficult to observe that for any element [ˇ∂i−ϱ,ˆ∂i+ϱ]∈Θϱi, W+i(ˆ∂i+ϱ)<0, then
This is in contradiction to x′i(t1)≥0. Then, xi(t)≤ˆ∂i+ϱ. Likewise, we can prove that xi(t)≥ˇ∂i−ϱ, for t≥t0 and i=1,2,...,n. Accordingly, each set in Θϱi is a positive invariant set. □
Remark 3.2. From Remark 3.1, there exist 2μ elements in Θϱi, so the number of positively invariant sets is 2μ for initial state ϕi(t0)∈Θϱ(1) of neural network (2.1).
Below, we will investigate whether the equilibria located in the positive invariant sets are multimode function stable for neural network (2.1). For this reason, we need to introduce the following assumption and lemma.
Assumption 3.2. ℏ(t) is a continuous, monotonically nondecreasing function. It satisfies ℏ(t)>0 for t≥0, and ℏ(r)=ℏ(0) for r∈[−˜τ,0]. Further suppose
holds, where P(⋅) is a monotonically nondecreasing nonnegative function, and ε>0 is a constant.
Hence, it is easy to obtain that (dℏ(t)/dt)/ℏ(t)≤ε.
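This growth bound is what lets a single function class cover several stability modes. The hedged check below (with illustrative ε values of my own choosing) verifies numerically that exponential, polynomial, and logarithmic choices of ℏ(t) are nondecreasing and satisfy ℏ′(t)/ℏ(t)≤ε.

```python
import math

def log_growth_ok(h, eps, t_max=50.0, n=500):
    """Check h' >= 0 and h'(t)/h(t) <= eps on a grid, via central differences."""
    for k in range(1, n + 1):
        t = t_max * k / n
        d = 1e-6
        hp = (h(t + d) - h(t - d)) / (2 * d)   # approximate h'(t)
        if hp < -1e-9 or hp / h(t) > eps + 1e-6:
            return False
    return True

assert log_growth_ok(lambda t: math.exp(0.06 * t), 0.06)   # exponential mode
assert log_growth_ok(lambda t: (1.0 + t) ** 2, 2.0)        # polynomial mode
assert log_growth_ok(lambda t: math.log(math.e + t), 1.0)  # logarithmic mode
```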
Lemma 3.1 ([26]). Suppose that Assumption 3.2 holds. Then,
where ζ∈[−˜τ,0] is a constant.
Let x∗∈Θϱ(1) be an equilibrium point of (2.1). Define
where x(t)=(x1(t),x2(t),...,xn(t))T is the solution of neural network (2.1) and its initial condition ϕ(r)∈Θϱ(1),r∈[−˜τ,0].
Thereupon,
For convenience, let Fi(t)=fi(υi(t)+x∗i)−fi(x∗i). Hence, from (3.5)
Consider the following expression:
When i∈L11, if li=[ˇui−ϱ,ˆui+ϱ],
and if li=[ˇλi−ϱ,ˆλi+ϱ],
When i∈L21, li=[ˇoi−ϱ,ˆoi+ϱ],
When i∈L31, li=[ˇoi−ϱ,ˆoi+ϱ],
When i∈L2∪L4, li=[ˇoi−ϱ,ˆoi+ϱ], −δi≤Fi(t)υi(t)≤δi.
When i∈L13, if li=[ˇui−ϱ,ˆui+ϱ],
if li=[ˇλi−ϱ,ˆλi+ϱ],
When i∈L23, li=[ˇoi−ϱ,ˆoi+ϱ],
When i∈L33, li=[ˇoi−ϱ,ˆoi+ϱ],
Taking into account these cases, we can get
where Ψi is described in Table 2.
Theorem 3.3. Assume that Assumptions 2.1, 3.1, and 3.2 are satisfied. Further suppose that there are n positive constants σ1,σ2,...,σn such that
holds for i=1,2,...,n. Then, there are 2μ equilibria which are locally multimode function stable, and 3μ−2μ equilibrium points are unstable in (2.1).
Proof. Based on the analysis in the previous subsection, there exist exactly 2μ equilibria in Θϱi. Our objective now is to prove that the 2μ equilibria in Θϱi are multimode function stable, while the other equilibria in ˇΘϱi are unstable.
Take
Then there must be some κ∈{1,2,...,n} such that ϖ(t)=|υκ(t)|/σκ.
Under (3.6), we get
Note that
and
Combining with the above calculation and (3.7), we can obtain
By invoking (3.9),
From Lemma 3.1,
Hence,
In the following, we claim that
If not, there must exist some T∗>0 such that
In other words,
which means that
From the above discussion and (3.10), we can derive
The result contradicts with (3.12). Hence, (3.11) holds, which implies
where ι=ˆσℏ(0)/ˇσ, ˆσ=max1≤i≤n(σi), ˇσ=min1≤i≤n(σi). The proof is complete. □
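A minimal 1-D sketch (not the authors' system; all parameters below are illustrative assumptions) of the multistability mechanism behind Theorem 3.3: a scalar delayed network with Gaussian activation whose equilibrium equation −x+3f(x)−2.5=0 has three roots, the outer two attracting and the middle one repelling. Different constant initial histories then settle on different stable equilibria.

```python
import math

# x'(t) = -eta*x(t) + beta*f(x(t)) + gamma*f(x(t - tau)) + I, f Gaussian.
eta, beta, gamma, I, tau = 1.0, 2.0, 1.0, -2.5, 1.0
dt, steps = 0.01, 6000
lag = int(tau / dt)

def f(x):
    return math.exp(-x * x)

def simulate(x0):
    """Euler integration with a constant initial history x(t) = x0, t <= 0."""
    hist = [x0] * (lag + 1)
    for _ in range(steps):
        x = hist[-1]
        dx = -eta * x + beta * f(x) + gamma * f(hist[-lag - 1]) + I
        hist.append(x + dt * dx)
    return hist[-1]

# Different initial histories converge to different stable equilibria
# (roots of -x + 3*f(x) - 2.5 = 0 at about -2.497 and 0.277).
assert abs(simulate(-2.0) - (-2.497)) < 0.05
assert abs(simulate(0.0) - 0.277) < 0.05
```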
4. Numerical examples
In this section, we offer two numerical examples of 2-dimensional CGNNs with Gaussian activation functions and mixed time delays to show the efficacy of the theoretical results.
Example 4.1. Consider the 2-dimensional CGNNs with Gaussian activation functions and mixed time delays presented below:
where the Gaussian activation functions are f1(r)=f2(r)=exp(−r2), τ1(t)=1.5+cos(t), τ2(t)=2t/(1+t), ˜τ1=1.1, ˜τ2=1.2.
It is apparent that
Since m1(x1(t))=4+sin(x1(t))∈[3,5], m2(x2(t))=5+sin(x2(t))∈[4,6], Assumption 2.1 is met. Moreover ˜τ=2.5, ˇs1=−2.53, ˆs1=−2.276, ˇs2=−3.01, ˆs2=−2.829. Hence, the boundary functions are as follows:
where the graphs of these boundary functions are described as Figures 1 and 2.
Additionally,
which demonstrates that 1∈L1 and 2∈L2.
By means of further calculations, we can obtain that μ=1. ˇu1≈−2.5270, ˆu1≈−2.2570, ˇv1≈−0.9550, ˆv1≈−0.8480, ˇλ1≈0.3629, ˆλ1≈0.4364, p1≈−1.5261, q1≈−0.1441. Also, W+1(p1)=−0.4285<0,W−1(q1)=0.8463>0. Therefore, 1∈L11.
Furthermore, ˇo2≈−3.0699, ˆo2≈−2.8286. f′1(ˇv1)=0.6621, f′1(ˆv1)=0.4270, f′1(ˆu1)=0.1131, f′1(ˇλ1)=−1.2914, f′1(ˆλ1)=−1.0235. By computation, F1≈0.4091, F2≈0.2564.
Then,
are met. Hence, by Theorem 3.1, there are 3 equilibria for (4.1). By applying MATLAB, we find that these equilibrium points are (−0.8174,−2.4286), (−2.4930,−2.4979), and (0.3674,−2.3795).
Moreover, ϱ1=min(p1−ˆu1,ˇλ1−q1)≈0.5070, ϱ2=1, so let ϱ=0.1. From Theorem 3.2, there exist two positively invariant sets, which are [−2.627,−2.156]×[−3.1699,−2.7286], and [0.3529,0.5364]×[−3.1699,−2.7286].
Next, we need to check the stability condition (3.8) in Theorem 3.3. Select σ1=σ2=1, Ψ1=max(f′1(ˆu1+ϱ),f′1(ˇλ1−ϱ),f′1(ˆλ1+ϱ))=0.1589, Ψ2≈0. Now, we let ℏ(t) be an exponential function with the expression ℏ(t)=exp(0.06t), so ε=0.06. By further calculating,
The result shows that (3.8) holds, that is, the equilibrium points (−2.4930,−2.4979) and (0.3674,−2.3795) are multimode function stable, whereas the equilibrium point (−0.8174,−2.4286) is unstable. The trajectory behavior of (4.1) and the equilibrium points are illustrated by Figures 3–5.
Example 4.2. Consider the 2-dimensional CGNNs with Gaussian activation functions and mixed time delays presented below:
where the Gaussian activation functions are f1(r)=f2(r)=exp(−r2), τ1(t)=1+0.5sin(t), τ2(t)=t/(1+t), ˜τ1=1.15, ˜τ2=1.22.
It is apparent that
Since m1(x1(t))=2+sin(x1(t))∈[1,3], m2(x2(t))=4+cos(x2(t))∈[3,5], Assumption 2.1 is met. Moreover ˜τ=1.5, ˇs1=−4.5, ˆs1=−3.7756, ˇs2=−5, ˆs2=−4.217. Hence, the boundary functions are as follows:
where the graphs of these boundary functions are portrayed in Figures 6 and 7.
Additionally,
which implies that 1∈L11,2∈L11, and μ=2.
By means of further calculations, ˇu1≈−4.5, ˆu1≈−3.7756, ˆv1≈−0.8821, ˇv1≈−0.7135, ˇλ1≈0.4841, ˆλ1≈0.6031, p1≈−1.7400, q1≈−0.1230. Also, ˇu2≈−5, ˆu2≈−4.217, ˆv2≈−0.9039,ˇv2≈−0.7543, ˇλ2≈0.5489, ˆλ2≈0.6566, p2≈−1.8380, q2≈−0.02.
Furthermore, f′1(ˇv1)≈0.8577, f′1(ˆv1)≈0.8103, f′1(ˆu1)≈0, f′1(ˇλ1)≈−0.7659, f′1(ˆλ1)≈−0.8384, f′2(ˇv2)≈0.7986, f′2(ˆv2)≈0.8540, f′2(ˆu2)≈0, f′2(ˇλ2)≈−0.8122, f′2(ˆλ2)≈−0.8533. By computation, F1=1, F2=1.5. Then,
are met. Hence, according to Theorem 3.1, there are 32=9 equilibrium points for (4.2). Moreover, ϱ1=min(p1−ˆu1,ˇλ1−q1)≈0.6071, ϱ2=min(p2−ˆu2,ˇλ2−q2)≈0.5689, so let ϱ=0.1. From Theorem 3.2, there exist four positively invariant sets, which are [−4.6,−3.6756]×[−5.1,−4.117], [0.3841,0.7031]×[−5.1,−4.117], [−4.6,−3.6756]×[0.4489,0.7566], and [0.3841,0.7031]×[0.4489,0.7566]. By applying MATLAB, the stable equilibrium points are found to be (−4.441,−2.639), (0.498,−2.355), (0.5126,0.7593), and (−4.193,0.6693).
Next, we need to check the stability condition (3.8) of Theorem 3.3. Select σ1=σ2=1, Ψ1=max(f′1(ˆu1+ϱ),f′1(ˇλ1−ϱ),f′1(ˆλ1+ϱ)), Ψ2=0.8578. Now, we let ℏ(t) be a logarithmic function, where P(t)=ln(t+8.0101), ε=0.5, so ℏ(t)=0.5−1(t+8.0101)ln(t+8.0101). By further calculating,
The result shows that (3.8) holds, that is, equilibrium points (-4.441, -2.639), (0.498, -2.355), (0.5126, 0.7593), and (-4.193, 0.6693) are multimode function stable. The trajectory behavior of (4.2) as well as the equilibrium points in this case are portrayed by Figures 8–10.
5. Conclusions
In this paper, we probe into the multimode function multistability of CGNNs with Gaussian activation functions and mixed time delays. Specifically, on account of the special geometric properties of Gaussian functions, the state space of an n-dimensional CGNN can be divided into 3μ subspaces (0≤μ≤n). Further, exploiting Brouwer's fixed point theorem and the contraction mapping principle, we conclude that each subspace contains one equilibrium point, that is, there are exactly 3μ equilibria for CGNNs with Gaussian activation functions and mixed time delays. Subsequently, by analyzing the invariant sets, it is deduced that 2μ equilibrium points are multimode function stable, while 3μ−2μ equilibrium points are unstable. This work extends the existing multimode function multistability results, offering effective assistance in the dynamic analysis of CGNNs with specific activation functions and mixed time delays.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conflict of interest
The authors declare that there are no conflicts of interest.