
This article addresses the robust dissipativity and passivity problems for a class of Markovian switching complex-valued neural networks with probabilistic time-varying delay and parameter uncertainties. The main objective is to study the proposed problem from a new perspective, in which the relevant transition rate information is partially unknown and the considered delay is characterized by a series of random variables obeying a Bernoulli distribution. Moreover, the involved parameter uncertainties are taken to be mode-dependent and norm-bounded. Utilizing the generalized Itô's formula in its complex version, stochastic analysis techniques and a robust analysis approach, the (M,N,W)-dissipativity and passivity are ensured by means of complex matrix inequalities, which are mode-delay-dependent. Finally, two simulation examples are provided to verify the effectiveness of the proposed results.
Citation: Qiang Li, Weiqiang Gong, Linzhong Zhang, Kai Wang. Robust dissipativity and passivity of stochastic Markovian switching CVNNs with partly unknown transition rates and probabilistic time-varying delay[J]. AIMS Mathematics, 2022, 7(10): 19458-19480. doi: 10.3934/math.20221068
Over the past several decades, the dynamical performance of complex-valued neural networks (CVNNs) has attracted considerable attention owing to its broad application prospects in signal processing, associative memory, pattern recognition and engineering optimization [1,2,3] and the references therein. CVNNs can effectively solve not only real-valued information problems but also complex-valued ones in the complex plane. In addition, CVNNs have a clear advantage over real-valued neural networks (RVNNs), although their dynamics are much more complicated. For instance, the analysis is no longer applicable [4] if the complex-valued activation functions are simply chosen as analogues of the real-valued ones. Furthermore, compared with RVNNs, CVNNs can solve a wider range of problems, including symmetry detection and the XOR problem [5,6]. In view of these points, it is important to explore the dynamical behaviors of CVNNs. In [7], based on a generalized {ξ,∞}-norm, the finite-time anti-synchronization issue of bounded asynchronously delayed master-slave coupled CVNNs has been addressed. By resorting to the matrix measure approach, the global exponential stability of delayed CVNNs has been reported in [8]. The global asymptotic stability problem of CVNNs with mixed delays has been studied in [9].
Owing to the influence of objective factors (limited communication time and speed), time-varying delay usually occurs during the transmission of neuron information, which may lead to unexpected performance. Up till now, many types of delays have been considered, for instance, leakage delay, distributed delay, proportional delay, probabilistic time-varying delay, and so on, and a large amount of research has been carried out under these different delays. For example, the global exponential stability of CVNNs with asynchronous time delays has been investigated in [10]. The global power-rate synchronization of chaotic networks with proportional delay has been tackled via impulsive control in [11].
Dissipativity was introduced in [12] and generalized in [13]. From a theoretical engineering point of view, dissipativity theory provides a fundamental framework for analysing different control systems. From the perspective of energy, passivity, whose main property is keeping a system internally stable, was first presented in circuit analysis [14]. Meanwhile, dissipativity/passivity has been utilized as an essential tool in the general control field. Based on refined Jensen inequalities, the dissipativity of stochastic delayed memristive networks has been explored in [15]. By a quadratic convex combination method, the global dissipativity/passivity of delayed T-S fuzzy general neural networks has been tackled in [16]. However, there are only a few results on the dissipativity/passivity of CVNNs with probabilistic time-varying delays [17], which is one of the main motivations of our research.
Random parameter uncertainties, which usually exist in complex systems, can lead to stochastic perturbations and are one of the factors causing poor performance. Therefore, when investigating the dynamical behaviors of complex systems, both parameter uncertainties and stochastic perturbations should be considered, and abundant corresponding achievements have been reported [18]. Nevertheless, the aforementioned works only consider Brownian motion and ignore switching behaviors. To better describe such switching phenomena, the Markovian switching mechanism has been proposed and a variety of significant results have been achieved [19,20]. For example, the global dissipativity/passivity of discrete-time stochastic Markovian switching Cohen-Grossberg systems has been studied in [21]. In [22], the robust passivity problem of stochastic Markovian switching systems with multiplicative noise has been investigated. It should be noticed that the transition rates directly influence the dynamics of Markovian switching systems during the jumping process. In most of the aforementioned works, the considered Markovian process is assumed to be precisely known. Nevertheless, owing to environmental noises, variation of time delays or packet dropouts, it is troublesome to measure and obtain the accurate transition rate information, which directly leads to incomplete transition rates. Recently, in order to analyze different types of uncertainties found in transition rates [23,24], extensions have been proposed to deal with transition rates with uncertainties. For instance, the stability of delayed Markovian networks with partly known transition rate information has been investigated in [25]. The exponential stability of mixed delayed impulsive Markovian jump networks with general incomplete transition rates has been studied in [26]. Regardless of these recent developments, when all factors are considered simultaneously, including stochastic disturbances, Markovian switching with partly known transition rates, probabilistic time-varying delay and uncertain parameters, there are no relevant results on the dissipativity/passivity problem for complex-valued networks in the complex domain, which is the most important motivation for this research.
In response to the statements given above, the main goal is to study the robust dissipativity and passivity issues of Markovian switching CVNNs, which involve stochastic disturbance, probabilistic time-varying delay and partly unknown transition rates. The main novelties are summarized as follows. (1) Partly unknown transition rates are considered for the first time in addressing the robust passivity and dissipativity problems for stochastic Markovian switching CVNNs with probabilistic time-varying delay and norm-bounded uncertainties. (2) A stochastic variable satisfying a Bernoulli random binary distribution is introduced into the time-varying delay to analyse the dissipativity and passivity of the considered delayed complex-valued neural networks. (3) By taking advantage of the robust analysis technique, stochastic analysis approach, Lyapunov stability theory and the generalized Itô's formula, sufficient criteria on (M,N,W)-dissipativity/passivity are obtained in the intuitive form of complex matrix inequalities, which are delay-mode-dependent. (4) Simulation results are given, which clearly show that the stochastic factors, i.e., the Markovian process and the Brownian motion, have a significant effect on the dissipativity/passivity performance index.
The remainder of this paper is outlined as follows. Section 2 presents the considered model and some necessary preliminaries. Section 3 derives the robust dissipativity and passivity criteria for the stochastic delayed Markovian switching CVNNs with probabilistic time-varying delay and partly known transition rates by utilizing a general Lyapunov functional method in the complex domain. Section 4 gives two illustrative numerical simulations to verify the viability of the presented results. In the end, the conclusion is given in Section 5.
Notations: \mathbb{R}^n and \mathbb{C}^n denote, respectively, the n-dimensional real and complex vector spaces. \mathbb{R}^{m\times n} and \mathbb{C}^{m\times n} are the sets of m\times n real and complex matrices. {\bf{I}} denotes the identity matrix with appropriate dimensions. (\Omega,X,\{X_t\}_{t\geq0},W) is a complete probability space, in which the filtration \{X_t\}_{t\geq0} is right-continuous and X_0 contains all W-null sets. The superscript 'T' stands for matrix transposition, and the superscript 'H' denotes the complex conjugate transpose. \vec{i} denotes the imaginary unit. '∗' denotes the elements induced by symmetry in a matrix. \mathrm{col}(A_\iota)_{\iota = 1}^{n} refers to (A_1^T,A_2^T,\ldots,A_n^T)^T. \mathbb{E}\{\cdot\} denotes the mathematical expectation.
Consider the following stochastic Markovian switching CVNNs with probabilistic time-varying delay and uncertain parameters:
\begin{align} \left\{ \begin{aligned} \mathrm{d}x(t) = &\big[-(C(s(t))+\Delta C(s(t)))x(t)+(A(s(t))+\Delta A(s(t)))f(x(t))\\ &+(B(s(t))+\Delta B(s(t)))g(x(t-\tau(t)))+\hbar(t)\big]\mathrm{d}t+h(t,x(t),x(t-\tau(t)))\mathrm{d}\omega(t),\\ y(t) = &f(x(t)),\qquad t\geq0, \end{aligned} \right. \end{align} | (2.1) |
where x(t) = \mathrm{col}(x_\iota)_{\iota = 1}^{n}\in\mathbb{C}^n is the state vector of the network with n nodes at time t. C(s(t)) = \mathrm{diag}\{c_\iota(s(t))\}_{\iota = 1}^{n}\in\mathbb{R}^{n\times n} is the self-feedback weight matrix with every entry c_\iota(s(t)) > 0. A(s(t)) = (a_{\iota m}(s(t)))_{n\times n} and B(s(t)) = (b_{\iota m}(s(t)))_{n\times n} denote, respectively, the connection weight matrix and the delayed connection weight matrix, both belonging to \mathbb{C}^{n\times n}. f(x(t)) = \mathrm{col}(f_\iota(x_\iota(t)))_{\iota = 1}^{n}:\mathbb{C}^n\rightarrow\mathbb{R}^n and g(x(t)) = \mathrm{col}(g_\iota(x_\iota(t)))_{\iota = 1}^{n}:\mathbb{C}^n\rightarrow\mathbb{C}^n stand for, respectively, the neuron activation functions without and with time delay. \hbar(t) = \mathrm{col}(\hbar_\iota(t))_{\iota = 1}^{n}\in\mathbb{R}^n and y(t) = \mathrm{col}(y_\iota(t))_{\iota = 1}^{n}\in\mathbb{R}^n stand for, respectively, the external input vector and the output vector. h(t,x(t),x(t-\tau(t))):\mathbb{R}\times\mathbb{C}^n\times\mathbb{C}^n\rightarrow\mathbb{C}^{n\times n} is the noise intensity function. \omega(t) is the n-dimensional Brownian motion defined on (\Omega,X,\{X_t\}_{t > 0},W). \tau(t) is the probabilistic time-varying delay, which satisfies
\begin{align} W\{\tau(t) = \tau_1(t)\} = \eta,\; \; \; W\{\tau(t) = \tau_2(t)\} = 1-\eta,\; \; \; \forall t > 0, \end{align} | (2.2) |
in which \tau_1(t)\in[\tau_1,\tilde{\tau}] and \tau_2(t)\in(\tilde{\tau},\tau_2] with \tau_1\leq\tilde{\tau}\leq\tau_2 being known positive numbers. Moreover, \dot{\tau}_1(t)\leq\mu_1 and \dot{\tau}_2(t)\leq\mu_2 .
The stochastic process \{s(t),t\geq0\}, taking values in the finite set \mathcal{S}\triangleq\{1,2,\ldots,N\}, is a continuous-time Markov process whose transition rate matrix \Pi\triangleq[\varpi_{ab}]_{N\times N} is defined through the transition probabilities
\begin{align*} W\{s(t+\theta) = b\,|\,s(t) = a\} = \begin{cases} \varpi_{ab}\theta+o(\theta), & a\neq b,\\ 1+\varpi_{aa}\theta+o(\theta), & a = b, \end{cases} \end{align*}
where \theta > 0 and \lim_{\theta\rightarrow0}(o(\theta)/\theta) = 0 ; for a\neq b , \varpi_{ab}\geq0 is the transition rate from mode a at time t to mode b at time t+\theta , and \varpi_{aa} = -\sum_{b = 1,b\neq a}^{N}\varpi_{ab} . Since the transition rates directly influence the behavior of a Markovian switching system, it is further assumed that only some of them are available. Accordingly, for every a\in\mathcal{S} , let \mathcal{S}\triangleq\mathcal{S}_{k}^{a}\cup\mathcal{S}_{uk}^{a} (also written \mathcal{S}_1^a and \mathcal{S}_2^a below), where \mathcal{S}_{k}^{a}\triangleq\{b:\varpi_{ab}\ \text{is known}\} and \mathcal{S}_{uk}^{a}\triangleq\{b:\varpi_{ab}\ \text{is unknown}\} . Moreover, if \mathcal{S}_{uk}^{a}\neq\emptyset , \mathcal{S}_{k}^{a} can be expressed as
\mathcal{S}_{k}^{a} = \{\mathcal{K}_1^a,\mathcal{K}_2^a,\ldots,\mathcal{K}_m^a\},
in which m is a positive integer belonging to \{1,\ldots,N-2\} , and \mathcal{K}_s^a\; (s\in\{1,2,\ldots,m\}) denotes the s -th known element in the a -th row of the transition rate matrix \Pi . To facilitate further analysis, when s(t) = a , the matrices C(s(t)) , A(s(t)) , B(s(t)) , \Delta C(s(t)) , \Delta A(s(t)) , and \Delta B(s(t)) are abbreviated as C_a , A_a , B_a , \Delta C_a , \Delta A_a , and \Delta B_a , respectively.
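For intuition about the switching mechanism defined above, the following Python sketch simulates a continuous-time Markov chain from a transition rate matrix. It is illustrative only: the numerical matrix Pi below is a hypothetical, fully specified rate matrix rather than one taken from this article (a partly unknown matrix such as the one in Example 4.1 cannot be simulated until its unknown entries are filled in).

```python
import numpy as np

def simulate_markov_chain(Pi, s0, T, rng=None):
    """Simulate a continuous-time Markov chain with transition rate matrix Pi on
    [0, T], starting from mode s0; returns the jump times and the visited modes."""
    rng = np.random.default_rng() if rng is None else rng
    times, modes = [0.0], [s0]
    t, s = 0.0, s0
    while True:
        rate = -Pi[s, s]                      # total exit rate of the current mode
        if rate <= 0:                         # absorbing mode, no further jumps
            break
        t += rng.exponential(1.0 / rate)      # exponentially distributed sojourn time
        if t >= T:
            break
        probs = Pi[s].copy()
        probs[s] = 0.0
        probs /= rate                         # jump probabilities to the other modes
        s = int(rng.choice(len(Pi), p=probs))
        times.append(t)
        modes.append(s)
    return times, modes

# Hypothetical fully specified three-mode rate matrix (each row sums to zero)
Pi = np.array([[-1.8, 1.0, 0.8],
               [0.6, -1.5, 0.9],
               [0.5, 0.7, -1.2]])
print(simulate_markov_chain(Pi, s0=0, T=10.0))
```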
The mode-dependent parameter uncertainties ΔCa∈Rn×n, ΔAa∈Cn×n, and ΔBa∈Cn×n are assumed to satisfy
\begin{align} [\Delta C_a\; \; \Delta A_a\; \; \Delta B_a] = D_a\mathfrak{W}(t)[H_{1a}\; \; H_{2a}\; \; H_{3a}], \end{align} | (2.3) |
in which D_a and H_{1a} are known real constant matrices, H_{2a} and H_{3a} are known complex constant matrices, and \mathfrak{W}(t) denotes an unknown real matrix function satisfying
\begin{align} \mathfrak{W}^{T}(t)\mathfrak{W}(t)\leq {\bf{I}}. \end{align} | (2.4) |
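To make the norm-bounded uncertainty structure (2.3) and (2.4) concrete, the following numpy sketch generates one admissible realization of the perturbations. All numerical matrices (D, H1, H2, H3) are random placeholders, not parameters from this article.

```python
import numpy as np
rng = np.random.default_rng(0)

n = 2
# Hypothetical known structure matrices (D, H1 real; H2, H3 complex), cf. (2.3)
D  = rng.standard_normal((n, n)) * 0.1
H1 = rng.standard_normal((n, n)) * 0.1
H2 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * 0.1
H3 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * 0.1

# A real matrix W(t) with W^T W <= I: scale a random matrix by its spectral norm
Wt = rng.standard_normal((n, n))
Wt = Wt / max(np.linalg.norm(Wt, 2), 1.0)
assert np.all(np.linalg.eigvalsh(np.eye(n) - Wt.T @ Wt) >= -1e-12)  # constraint (2.4)

# One admissible realization of the mode-dependent perturbations, cf. (2.3)
dC, dA, dB = D @ Wt @ H1, D @ Wt @ H2, D @ Wt @ H3
print(np.linalg.norm(dC), np.linalg.norm(dA), np.linalg.norm(dB))
```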
To simplify the further analysis, a Bernoulli distributed white sequence \eta(t) is introduced as follows:
\begin{align*} \eta(t) = \begin{cases} 1, & \tau(t) = \tau_1(t),\\ 0, & \tau(t) = \tau_2(t). \end{cases} \end{align*}
Combining the above analysis, network (2.1) can be rewritten as
\begin{align} \mathrm{d}x(t) = &\big[-(C(s(t))+\Delta C(s(t)))x(t)+(A(s(t))+\Delta A(s(t)))f(x(t))+\eta(t)(B(s(t))+\Delta B(s(t)))\\ &\times g(x(t-\tau_1(t)))+(1-\eta(t))(B(s(t))+\Delta B(s(t)))g(x(t-\tau_2(t)))+\hbar(t)\big]\mathrm{d}t\\ &+\eta(t)h(t,x(t),x(t-\tau_1(t)))\mathrm{d}\omega(t)+(1-\eta(t))h(t,x(t),x(t-\tau_2(t)))\mathrm{d}\omega(t). \end{align} | (2.5) |
Remark 2.1. It is worth noticing that the random variable \eta(t) has the following statistical properties: W\{\eta(t) = 1\} = \mathbb{E}\{\eta(t)\} = \tilde{\eta} , W\{\eta(t) = 0\} = 1-\mathbb{E}\{\eta(t)\} = 1-\tilde{\eta} , \mathbb{E}\{\eta^2(t)\} = \tilde{\eta} , \mathbb{E}\{(1-\eta(t))^2\} = 1-\tilde{\eta} , and \mathbb{E}\{\eta(t)(1-\eta(t))\} = 0 . Moreover, \eta(t) is independent of \omega(t) and s(t) .
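As a quick numerical illustration of the probabilistic delay mechanism and of the moments listed in Remark 2.1, the sketch below samples a Bernoulli switching sequence and the resulting delay; the delay branches and the value of the probability are borrowed from Example 4.1 purely for illustration.

```python
import numpy as np
rng = np.random.default_rng(1)

eta_bar = 0.9                                # probability W{tau(t) = tau_1(t)}
tau1 = lambda t: 0.4 + 0.2 * np.sin(t)       # delay branches as in Example 4.1
tau2 = lambda t: 1.0 + 0.4 * np.cos(t)

t = np.linspace(0.0, 50.0, 20001)
eta = rng.binomial(1, eta_bar, size=t.size)  # Bernoulli switching sequence eta(t)
tau = eta * tau1(t) + (1 - eta) * tau2(t)    # probabilistic delay tau(t)
print(tau.min(), tau.max())                  # range of the sampled delay

# Empirical check of the moments used in Remark 2.1:
# ~eta_bar, ~1 - eta_bar, and exactly 0, respectively
print(eta.mean(), ((1 - eta) ** 2).mean(), (eta * (1 - eta)).mean())
```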
The initial value of system (2.1) is defined as
\begin{align} x(e) = \zeta(e),\; \; -\tau_2\leq e\leq 0, \end{align} | (2.6) |
in which \zeta(e) = (\zeta_1(e),\ldots,\zeta_n(e))^T\in\mathbb{C}^n belongs to L^2_{X_0}([-\tau_2,0],\mathbb{C}^n) , the family of all X_0 -measurable, C([-\tau_2,0],\mathbb{C}^n) -valued random variables satisfying \sup_{-\tau_2\leq e\leq0}\mathbb{E}\{\|\zeta(e)\|^2\} < \infty . Moreover, it should be pointed out that \zeta(\cdot) is independent of the Brownian motion \omega(\cdot) , the Markov process s(\cdot) and the random variable \eta(t) .
For further discussion, the given nonlinear activation functions satisfy the following conditions which will be used later.
Assumption 2.1. The considered activation functions f_\iota(\cdot), \; g_\iota(\cdot)\; (\iota = 1, 2, \ldots, n) satisfy the Lipschitz condition and f_\iota(0) = g_\iota(0) = 0 , i.e., there exist positive constants \sigma_\iota, \; \rho_\iota such that
\begin{align*} |f_\iota(\varepsilon_1)-f_\iota(\varepsilon_2)|\leq \sigma_\iota |\varepsilon_1-\varepsilon_2|,\; \; |g_\iota(\varepsilon_1)-g_\iota(\varepsilon_2)|\leq \rho_\iota |\varepsilon_1-\varepsilon_2|,\; \; \forall \varepsilon_1, \varepsilon_2\in\mathbb{C}. \end{align*} |
Assumption 2.2. There exist positive semi-definite Hermitian matrices V_1 and V_2 of appropriate dimensions satisfying the inequality below:
\begin{align*} h^H(t,\epsilon,\tilde{\epsilon})h(t,\epsilon,\tilde{\epsilon})\leq \epsilon^HV_1\epsilon+ \tilde{\epsilon}^HV_2\tilde{\epsilon},\; \; \; \forall \epsilon, \tilde{\epsilon}\in\mathbb{C}^n. \end{align*} |
Remark 2.2. It is worth noting that the nonlinear functions in Assumption 2.1 can be viewed as extensions of the real-valued Lipschitz-type activation functions. Moreover, most existing works on CVNNs decompose the considered CVNNs into two real-valued networks, which doubles the dimension of the resulting matrices and increases the computational complexity [27,28]. In view of these points, it is desirable to study the dynamic behaviors of CVNNs directly in the complex domain.
In this article, the robust dissipativity/passivity criteria will be established for system (2.1) by utilizing the mode-dependent Lyapunov-Krasovskii functional. Before stating the main results, we present some useful definitions and lemmas.
For CVNN (2.1), set energy input-output function \mathcal {H} as
\begin{align} \mathcal {H}(\hbar,y,t)\triangleq2\langle y,N\hbar\rangle_{t}+\langle \hbar,W\hbar\rangle_{t}+\langle y,My\rangle_{t},\; \; \; \forall t\geq0, \end{align} | (2.7) |
in which N is a real matrix, M and W are Hermitian matrices, and \langle y, N\hbar\rangle_{t} stands for \int_0^t y^H(s)N\hbar(s)\mathrm{d}s .
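For later reference, the energy input-output function (2.7) can be evaluated numerically from sampled trajectories. The sketch below is a straightforward Riemann-sum discretization under the assumption that y and \hbar are stored as arrays on a uniform time grid; the trajectories and weighting matrices in the usage example are random placeholders.

```python
import numpy as np

def supply_rate_H(y, hbar, M, N, W, dt):
    """Discretized evaluation of the energy supply function (2.7),
    H(hbar, y, t) = 2<y, N hbar>_t + <hbar, W hbar>_t + <y, M y>_t,
    with <u, v>_t approximated by sum_k u(k)^H v(k) dt.
    y and hbar are arrays of shape (steps, n) sampled on a uniform grid."""
    term_yN = 2.0 * np.einsum('ti,ij,tj->', np.conj(y), N, hbar)
    term_W = np.einsum('ti,ij,tj->', np.conj(hbar), W, hbar)
    term_M = np.einsum('ti,ij,tj->', np.conj(y), M, y)
    return np.real(term_yN + term_W + term_M) * dt

# Tiny usage with random (placeholder) trajectories and weighting matrices
rng = np.random.default_rng(7)
steps, n, dt = 500, 2, 0.01
y, hbar = rng.standard_normal((steps, n)), rng.standard_normal((steps, n))
M, W, N = -np.eye(n), 5.0 * np.eye(n), np.eye(n)
print(supply_rate_H(y, hbar, M, N, W, dt))
```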
Definition 2.1. Under the zero initial condition, system (2.1) is said to be strictly (M, N, W) -dissipative in the sense of expectation if there exists a scalar \gamma > 0 such that the inequality below
\begin{align*} \mathbb{E}\{\mathcal {H}(\hbar,y,t)\}\geq \gamma\langle \hbar, \hbar\rangle_t,\; \; \; \forall t\geq0. \end{align*} |
holds.
Definition 2.2. Under the zero initial condition, system (2.1) is said to be robustly passive from the input \hbar(\cdot) to the output y(\cdot) in the sense of expectation if there exists a scalar \gamma > 0 such that the inequality below
\begin{align*} 2\mathbb{E}\bigg\{ \int_0^ty^T(s)\hbar(s)\mathrm{d}s\bigg\}\geq-\gamma \int_0^t\hbar^T(s)\hbar(s)\mathrm{d}s,\; \; \; \forall t\geq0. \end{align*} |
holds.
Definition 2.3. [29,30] Consider a n -dimensional stochastic Markovian switching complex-valued differential equation:
\begin{equation*} \mathrm{d}\phi(t) = {\bf{F}}(t,\phi(t),\phi(t-\tau(t)),s(t))\mathrm{d}t+{\bf{G}}(t,\phi(t),\phi(t-\tau(t)),s(t))\mathrm{d}\mu(t),\; \; \; t\geq0 \end{equation*} |
where \phi(t) = (\phi_{1}(t), \phi_{2}(t), \ldots, \phi_{n}(t))^{\mathrm{T}}\in\mathbb{C}^{n} , {\bf{F}}, {\bf{G}} are general continuous functions. Calculate the \mathbb{R} -derivative of \Psi [31] as
\frac{\partial \Psi(t,\phi,a)}{\partial \phi}\Bigg|_{\bar{\phi} = \mathrm{const}}\triangleq\left(\frac{\partial \Psi(t,\phi,a)}{\partial \phi_{1}},\frac{\partial \Psi(t,\phi,a)}{\partial \phi_{2}},\ldots,\frac{\partial \Psi(t,\phi,a)}{\partial \phi_{n}}\right)\Bigg|_{\bar{\phi} = \mathrm{const}}, |
and the conjugate \mathbb{R} derivative of \Psi [31] as
\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}}\Bigg|_{\phi = \mathrm{const}}\triangleq\left(\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}_{1}},\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}_{2}},\ldots,\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}_{n}}\right)\Bigg|_{\phi = \mathrm{const}} |
in which \bar{\phi} is the conjugate vector of \phi . All functions \Psi(t, \phi, a):\mathbb{R}_{+}\times\mathbb{C}^{n}\times\mathcal{S}\rightarrow \mathbb{R}_{+} belong to {\bf{C}}^{1, 2}(\mathbb{R}_{+}\times\mathbb{C}^{n}\times\mathcal{S}, \mathbb{R}_{+}) , i.e., they are twice continuously differentiable in \phi and \bar{\phi} and once continuously differentiable in t . Then, for all \Psi(t, \phi, a) , the complex version of the generalized It \hat{o} 's formula can be given in the form below:
\begin{align} \begin{split} &\mathrm{d}\Psi(t,\phi,a)\\ = &\sum\limits_{b = 1}^{N}\varpi_{ab}\Psi(t,\phi,b)\mathrm{d}t +\frac{\partial \Psi(t,\phi,a)}{\partial t}\mathrm{d}t+\frac{\partial \Psi(t,\phi,a)}{\partial \phi}\mathrm{d}\phi\\ &+\frac{1}{2}\sum\limits_{p,q = 1}^{n}\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi_{p}\partial \phi_{q}}\mathrm{d}\phi_{p}\mathrm{d}\phi_{q}\\ &+\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}}\mathrm{d}\bar{\phi}+\sum\limits_{p,q = 1}^{n}\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi_{p}\partial \bar{\phi}_{q}}\mathrm{d}\phi_{p}\mathrm{d}\bar{\phi}_{q}\\ &+\frac{1}{2}\sum\limits_{p,q = 1}^{n}\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \bar{\phi}_{p}\partial \bar{\phi}_{q}}\mathrm{d}\bar{\phi}_{p}\mathrm{d}\bar{\phi}_{q}\\ = &\Bigg[\sum\limits_{b = 1}^{N}\varpi_{ab}\Psi(t,\phi,b)+\frac{\partial \Psi(t,\phi,a)}{\partial t}+\frac{\partial \Psi(t,\phi,a)}{\partial \phi}{\bf{F}}(t,a)\\ &+{\bf{G}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi\partial \bar{\phi}}\bar{{\bf{G}}}(t,a) +\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}}\bar{{\bf{F}}}(t,a)\\ &+\frac{1}{2}{\bf{G}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi^{2}}{\bf{G}}(t,a)\\ &+\frac{1}{2}\bar{{\bf{G}}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \bar{\phi}\partial \bar{\phi}}\bar{{\bf{G}}}(t,a)\Bigg]\mathrm{d}t\\ &+\Bigg[\frac{\partial \Psi(t,\phi,a)}{\partial \phi}{\bf{G}}(t,a)+\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}}\bar{{\bf{G}}}(t,a)\Bigg]\mathrm{d}\mu(t), \end{split} \end{align} | (2.8) |
where {\bf{F}}(t, a) denotes {\bf{F}}(t, \phi(t), \phi(t-\tau(t)), a) and {\bf{G}}(t, a) denotes {\bf{G}}(t, \phi(t), \phi(t-\tau(t)), a) for simplicity,
\begin{align*} &\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi^{2}}\triangleq\left(\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi_{p}\partial \phi_{q}}\right)_{n\times n},\; \; \; \frac{\partial^{2}\Psi(t,\phi,a)}{\partial \bar{\phi}\partial \bar{\phi}}\triangleq\left(\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \bar{\phi}_{p}\partial \bar{\phi}_{q}}\right)_{n\times n},\\ &\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi\partial \bar{\phi}}\triangleq\left(\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi_{p}\partial \bar{\phi}_{q}}\right)_{n\times n}. \end{align*} |
In addition, the operator \mathcal{L} on \Psi(t, \phi, a) is defined as
\begin{align} \begin{split} \mathcal{L}\Psi(t,\phi,a) \triangleq&\sum\limits_{b = 1}^{N}\varpi_{ab}\Psi(t,\phi,b)+\frac{\partial \Psi(t,\phi,a)}{\partial t}+\frac{1}{2}{\bf{G}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi^{2}}{\bf{G}}(t,a)\\ &+{\bf{G}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \phi\partial \bar{\phi}}\bar{{\bf{G}}}(t,a)+\frac{1}{2}\bar{{\bf{G}}}^{\mathrm{T}}(t,a)\frac{\partial^{2}\Psi(t,\phi,a)}{\partial \bar{\phi}\partial \bar{\phi}}\bar{{\bf{G}}}(t,a)\\ &+\frac{\partial \Psi(t,\phi,a)}{\partial \bar{\phi}}\bar{{\bf{F}}}(t,a)+\frac{\partial \Psi(t,\phi,a)}{\partial \phi}{\bf{F}}(t,a). \end{split} \end{align} | (2.9) |
Lemma 2.1. [32] For a positive definite Hermitian matrix L\in\mathbb{C}^{n\times n} and an integrable function \Theta(\cdot):[k, c]\rightarrow \mathbb{C}^n , where the scalars satisfy k < c , the following inequality holds:
\begin{align*} -\int_k^c\Theta^{H}(e)L\Theta(e)\mathrm{d}e\leq-\frac{1}{(c-k)}\bigg[\int_k^c\Theta(e)\mathrm{d}e\bigg]^{H} L\bigg[\int_k^c\Theta(e)\mathrm{d}e\bigg]. \end{align*} |
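A simple numerical spot-check of this Jensen-type integral inequality, using a discretized integral, a random positive definite Hermitian L and a random complex-valued \Theta , might look as follows; all data are illustrative and the check holds only up to discretization error.

```python
import numpy as np
rng = np.random.default_rng(2)

n, m = 3, 400                       # state dimension and number of grid points
k, c = 0.0, 2.0
e = np.linspace(k, c, m)
de = e[1] - e[0]

# Random positive definite Hermitian L and a complex-valued test function Theta(e)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = X @ X.conj().T + n * np.eye(n)
Theta = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

lhs = -np.real(np.einsum('ti,ij,tj->', np.conj(Theta), L, Theta)) * de
v = Theta.sum(axis=0) * de          # approximates the integral of Theta over [k, c]
rhs = -np.real(v.conj() @ L @ v) / (c - k)
print(lhs <= rhs + 1e-9)            # Jensen-type bound of Lemma 2.1 (up to discretization error)
```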
Lemma 2.2. [33] For vectors \vartheta , \chi\in\mathbb{C}^n , any real matrix \Omega\in\mathbb{R}^{n\times n} satisfying \Omega^T\Omega\leq {\bf{I}} , and any scalar \xi > 0 , the inequality below is valid:
\begin{align*} \vartheta^{H}\Omega^T \chi+\chi^{H}\Omega \vartheta\leq \xi^{-1}\vartheta^{H}\vartheta+\xi \chi^{H}\chi. \end{align*} |
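Similarly, the bound of Lemma 2.2 can be verified numerically for randomly generated data; the sketch below uses an arbitrary \xi > 0 and a random \Omega scaled so that \Omega^T\Omega\leq {\bf{I}} .

```python
import numpy as np
rng = np.random.default_rng(6)

n = 4
v  = rng.standard_normal(n) + 1j * rng.standard_normal(n)        # vartheta
x  = rng.standard_normal(n) + 1j * rng.standard_normal(n)        # chi
Om = rng.standard_normal((n, n))
Om = Om / max(np.linalg.norm(Om, 2), 1.0)                        # Omega^T Omega <= I
xi = 0.7                                                         # any xi > 0

lhs = np.real(v.conj() @ Om.T @ x + x.conj() @ Om @ v)
rhs = np.real(v.conj() @ v) / xi + xi * np.real(x.conj() @ x)
print(lhs <= rhs + 1e-12)        # the bound of Lemma 2.2 holds
```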
Lemma 2.3. [34] A given Hermitian matrix
\begin{align*} \Xi = \begin{pmatrix} \Xi_{11}&\Xi_{12}\\ \Xi_{21}&\Xi_{22} \end{pmatrix} < 0, \end{align*} |
where \Xi^{H}_{11} = \Xi_{11} , \Xi^{H}_{12} = \Xi_{21} , and \Xi^{H}_{22} = \Xi_{22} , is equivalent to either one of the inequalities below:
1) \Xi_{22} < 0 and \Xi_{11}-\Xi_{12}\Xi_{22}^{-1}\Xi_{21} < 0 ,
2) \Xi_{11} < 0 and \Xi_{22}-\Xi_{21}\Xi_{11}^{-1}\Xi_{12} < 0 .
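The equivalence stated in Lemma 2.3 (the Schur complement) is easy to confirm numerically; the sketch below builds a random Hermitian negative definite block matrix and checks condition 1).

```python
import numpy as np
rng = np.random.default_rng(3)

n = 3
# Build a random Hermitian, negative definite block matrix Xi = [[Xi11, Xi12], [Xi12^H, Xi22]]
A = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
Xi = -(A @ A.conj().T) - 0.1 * np.eye(2 * n)
Xi11, Xi12, Xi22 = Xi[:n, :n], Xi[:n, n:], Xi[n:, n:]

neg_def = lambda H: np.all(np.linalg.eigvalsh(H) < 0)

# Lemma 2.3: Xi < 0  <=>  Xi22 < 0 and Xi11 - Xi12 Xi22^{-1} Xi12^H < 0
schur = Xi11 - Xi12 @ np.linalg.solve(Xi22, Xi12.conj().T)
print(neg_def(Xi), neg_def(Xi22) and neg_def(schur))   # both True
```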
This section is concerned with the robust dissipativity and passivity issues of system (2.1). First, the dissipativity criteria are derived in Theorem 3.1. The passivity criterion, as a special case of the dissipativity criteria, is subsequently presented in Theorem 3.2.
Theorem 3.1. Under Assumptions 2.1 and 2.2, network (2.1) is strictly (M, N, W) -dissipative from the input \hbar(t) in the sense of expectation if there exist positive definite Hermitian matrices P_a , Q_\iota\; (\iota = 1, 2, 3) , R_\kappa\; (\kappa = 1, 2) and S_\kappa , Hermitian matrices U_a with appropriate dimensions, diagonal matrices \Lambda_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , and scalars \vartheta > 0 , \lambda > 0 , \delta > 0 and \nu_\varsigma > 0 such that the following matrix inequalities are valid for every a\in\mathcal{S} :
\begin{align} &P_a\leq \vartheta {\bf{I}}, \; \; \end{align} | (3.1) |
\begin{align} &\begin{bmatrix} \Theta_{11}&\Theta_{12}&\Theta_{13}\\ *&\Theta_{22}&\Theta_{23}\\ *&*&\Theta_{33} \end{bmatrix} < 0,\; \end{align} | (3.2) |
\begin{align} &P_b-U_a\leq 0,\; \; \; b\in \mathcal {S}_{uk}^{a}\backslash \{a\} , \end{align} | (3.3) |
\begin{align} &P_b-U_a\geq 0,\; \; \; b\in \mathcal {S}_{uk}^{a}\cap \{a\}\; , \end{align} | (3.4) |
where
\begin{align*} &\Theta_{23} = \begin{bmatrix} 0&0&-N&0&\eta \delta H_{2a}^T\\ 0&0&0&0&0\\ 0&0&0&0&\eta \delta H_{3a}^T\\ 0&0&0&0&0\\ 0&0&0&0&(1-\eta) \delta H_{3a}^T \end{bmatrix},\\ &\Theta_{11} = \mathrm{diag}\{\Omega_{11},\Omega_{22},\Omega_{33},\Omega_{44},\Omega_{55},\Omega_{66}\},\\ &\Theta_{22} = \mathrm{diag}\{-\nu_1\Lambda_1-M,-\nu_2\Lambda_2,-\nu_3\Lambda_3,-\nu_4\Lambda_4,-\nu_5\Lambda_5,-\nu_6\Lambda_6\},\\ &\Theta_{33} = \mathrm{diag}\Big\{-\frac{1}{\tilde{\tau}-\tau_1}R_2,-\frac{1}{\tau_2-\tilde{\tau}}S_2,\gamma{\bf{I}}-W,-\delta{\bf{I}},-\delta{\bf{I}}\Big\},\\ &\Theta_{12} = [\beta_1\; 0\; 0\; 0\; 0\; 0]^T,\; \; \Theta_{13} = [\beta_2\; 0\; 0\; 0\; 0\; 0]^T, \end{align*} |
with \beta_1^T = [P_aA_a\; 0\; \eta P_aB_a\; 0\; (1-\eta)P_aB_a\; 0] , \beta_2^T = [0\; 0\; P_a\; P_aD_a\; \delta H_{1a}^T] , and
\begin{align*} &\Omega_{11} = \sum\limits_{b\in\mathcal {S}_{k}^a}\varpi_{ab}(P_b-U_a)-P_aC_a-C_aP_a+Q_1+Q_2+Q_3+R_1+(\tilde{\tau}-\tau_1)R_2+S_1\\ &\quad\quad\; \; +(\tau_2-\tilde{\tau})S_2+\eta\vartheta V_1+(1-\eta)\vartheta V_3+\nu_1\Gamma_1\Lambda_1,\\ &\Omega_{22} = -Q_2+\nu_2 \Gamma_2 \Lambda_2,\; \; \Omega_{44} = -Q_1+\nu_4 \Gamma_2 \Lambda_4,\; \; \Omega_{66} = -Q_3+\nu_6 \Gamma_2 \Lambda_6,\\ &\Omega_{33} = -(1-\mu_1)R_1+\eta \vartheta V_2+\nu_3 \Gamma_2\Lambda_3,\\ &\Omega_{55} = -(1-\mu_2)S_1+(1-\eta) \vartheta V_4+\nu_5 \Gamma_2\Lambda_5,\\ &\Gamma_1 = \mathrm{diag}\{\sigma_1^2,\sigma_2^2,\ldots,\sigma_n^2\},\; \; \Gamma_2 = \mathrm{diag}\{\rho_1^2,\rho_2^2,\ldots,\rho_n^2\}. \end{align*} |
Proof. Choose a Lyapunov-Krasovskii functional below for network (2.1) as
\begin{align} \aleph(t) = \aleph_1(t)+\aleph_2(t)+\aleph_3(t)+\aleph_4(t), \end{align} | (3.5) |
where
\begin{align*} \aleph_1(t) = &x^H(t)P(s(t))x(t),\\ \aleph_2(t) = &\int_{t-\tilde{\tau}}^{t}x^H(s)Q_1x(s)\mathrm{d}s+\int_{t-\tau_1}^{t}x^H(s)Q_2x(s)\mathrm{d}s+\int_{t-\tau_2}^{t}x^H(s)Q_3x(s)\mathrm{d}s,\\ \aleph_3(t) = &\int_{t-\tau_1(t)}^{t}x^H(s)R_1x(s)\mathrm{d}s+ \int_{-\tilde{\tau}}^{-\tau_1}\int_{t+\theta}^tx^{H}(s)R_2x(s)\mathrm{d}s\mathrm{d}\theta,\\ \aleph_4(t) = &\int_{t-\tau_2(t)}^{t}x^H(s)S_1x(s)\mathrm{d}s+ \int_{-\tau_2}^{-\tilde{\tau}}\int_{t+\theta}^tx^{H}(s)S_2x(s)\mathrm{d}s\mathrm{d}\theta, \end{align*} |
where P(s(t)) , Q_1 , Q_2 , Q_3 , R_1 , R_2 , S_1 and S_2 are matrices to be determined. Applying the infinitesimal generator \mathcal{L} given by the generalized complex It \hat{o} 's formula in Definition 2.3 along the trajectory of system (2.1) yields
\begin{align} \mathbb{E}\Big\{\mathcal{L}\aleph_1(t)\Big\} = &\mathbb{E}\Big\{x^H(t)P_a\big(-(C_a+\Delta C_a)x(t)\\ &+(A_a+\Delta A_a)f(x(t))+\eta(B_a+\Delta B_a)g(x(t-\tau_1 (t))) \\ &+(1-\eta)(B_a+\Delta B_a)g(x(t-\tau_2 (t)))\\ &+\hbar(t)\big)+\big(-(C_a+\Delta C_a)x(t)+\hbar(t)+(A_a \\&+\Delta A_a)f(x(t))+\eta(B_a+\Delta B_a)g(x(t-\tau_1 (t)))+(1-\eta)(B_a+\Delta B_a) \\&\times g(x(t-\tau_2 (t)))\big)^HP_ax(t)+\{\eta(t)h(t,x(t),x(t-\tau_1(t))) \\&+(1-\eta(t))h(t,x(t),x(t-\tau_2(t)))\}^HP_a\{\eta(t)h(t,x(t),x(t-\tau_1(t))) \\&+(1-\eta(t))h(t,x(t),x(t-\tau_2(t)))\}+\sum\limits_{b = 1}^N\varpi_{ab} x^H(t)P_bx(t)\Big\}, \end{align} | (3.6a) |
\begin{align} \mathbb{E}\Big\{\mathcal{L}\aleph_2(t)\Big\} = &\mathbb{E}\Big\{x^H(t)(Q_1+Q_2+Q_3)x(t)- \\ &x^H(t-\tilde{\tau})Q_1x(t-\tilde{\tau})-x^H(t-\tau_1)Q_2x(t-\tau_1) \\ &-x^H(t-\tau_2)Q_3x(t-\tau_2)\Big\}, \end{align} | (3.6b) |
\begin{align} \mathbb{E}\Big\{\mathcal{L}\aleph_3(t)\Big\} = &\mathbb{E}\bigg\{x^H(t)(R_1+(\tilde{\tau}-\tau_1)R_2)x(t)-(1-\dot{\tau_1}(t)) x^H(t-\tau_1(t))R_1x(t-\tau_1(t)) \\ &-\int_{t-\tilde{\tau}}^{t-\tau_1}x^H(s)R_2x(s)\mathrm{d}s\Big\} \\ \leq&\mathbb{E}\bigg\{x^H(t)(R_1+(\tilde{\tau}-\tau_1)R_2)x(t)-(1-\mu_1) x^H(t-\tau_1(t))R_1x(t-\tau_1(t)) \\ &-\int_{t-\tilde{\tau}}^{t-\tau_1}x^H(s)R_2x(s)\mathrm{d}s\Big\} , \end{align} | (3.6c) |
\begin{align} \mathbb{E}\Big\{\mathcal{L}\aleph_4(t)\Big\} = &\mathbb{E}\bigg\{x^H(t)(S_1+(\tau_2-\tilde{\tau})S_2)x(t)-(1-\dot{\tau_2}(t)) x^H(t-\tau_2(t))S_1x(t-\tau_2(t)) \\&-\int_{t-\tau_2}^{t-\tilde{\tau}}x^H(s)S_2x(s)\mathrm{d}s\Big\} \\ \leq&\mathbb{E}\bigg\{x^H(t)(S_1+(\tau_2-\tilde{\tau})S_2)x(t)-(1-\mu_2) x^H(t-\tau_2(t))S_1x(t-\tau_2(t))\\ &-\int_{t-\tau_2}^{t-\tilde{\tau}}x^H(s)S_2x(s)\mathrm{d}s\Big\} , \end{align} | (3.6d) |
Here, to obtain (3.6c) and (3.6d), the conditions \dot{\tau}_1(t)\leq\mu_1 and \dot{\tau}_2(t)\leq\mu_2 have been exploited.
According to the presented form of \eta(t) , it follows from Assumption 2.2 and condition (3.1) that
\begin{align} &\mathbb{E}\Big\{\{\eta(t)h(t,x(t),x(t-\tau_1(t)))+(1-\eta(t))h(t,x(t),x(t-\tau_2(t)))\}^HP_a\\ &\times\{\eta(t)h(t,x(t),x(t-\tau_1(t)))+(1-\eta(t))h(t,x(t),x(t-\tau_2(t)))\}\Big\}\\ &\leq \eta \vartheta h^H(t,x(t),x(t-\tau_1(t)))h(t,x(t),x(t-\tau_1(t)))\\ &+(1-\eta)\vartheta h^H(t,x(t),x(t-\tau_2(t)))h(t,x(t),x(t-\tau_2(t)))\\ &\leq x^H(t)(\eta\vartheta V_1+(1-\eta)\vartheta V_3)x(t)+x^H(t-\tau_1(t))(\eta\vartheta V_2)x(t-\tau_1(t))\\ &+x^H(t-\tau_2(t))((1-\eta)\vartheta V_4)x(t-\tau_2(t)). \end{align} | (3.7) |
Moreover, from Lemma 2.1, one has
\begin{align} &-\int_{t-\tilde{\tau}}^{t-\tau_1}x^H(s)R_2x(s)\mathrm{d}s \\&\leq-\frac{1}{\tilde{\tau}-\tau_1}\bigg(\int_{t-\tilde{\tau}}^{t-\tau_1}x(s)\mathrm{d}s\bigg)^HR_2\bigg(\int_{t-\tilde{\tau}}^{t-\tau_1}x(s)\mathrm{d}s\bigg) \end{align} | (3.8a) |
\begin{align} &-\int_{t-\tau_2}^{t-\tilde{\tau}}x^H(s)S_2x(s)\mathrm{d}s \\ &\leq-\frac{1}{\tau_2-\tilde{\tau}}\bigg(\int_{t-\tau_2}^{t-\tilde{\tau}}x(s)\mathrm{d}s\bigg)^HS_2\bigg(\int_{t-\tau_2}^{t-\tilde{\tau}}x(s)\mathrm{d}s\bigg) . \end{align} | (3.8b) |
In addition, for any real diagonal matrices \Lambda_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , it follows from Assumption 2.1 that
\begin{align} f^{T}(x(t))\Lambda_1f(x(t))&\leq x^H(t)\Gamma_1 \Lambda_1x(t),\; \; \end{align} | (3.9a) |
\begin{align} g^{H}(x(t-\tau_1))\Lambda_2g(x(t-\tau_1))&\leq x^H(t-\tau_1)\Gamma_2 \Lambda_2x(t-\tau_1),\; \; \end{align} | (3.9b) |
\begin{align} g^{H}(x(t-\tau_1(t)))\Lambda_3g(x(t-\tau_1(t)))&\leq x^H(t-\tau_1(t))\Gamma_2 \Lambda_3x(t-\tau_1(t)),\; \; \end{align} | (3.9c) |
\begin{align} g^{H}(x(t-\tilde{\tau}))\Lambda_4g(x(t-\tilde{\tau}))&\leq x^H(t-\tilde{\tau})\Gamma_2 \Lambda_4x(t-\tilde{\tau}),\; \; \end{align} | (3.9d) |
\begin{align} g^{H}(x(t-\tau_2(t)))\Lambda_5g(x(t-\tau_2(t)))&\leq x^H(t-\tau_2(t))\Gamma_2 \Lambda_5x(t-\tau_2(t)),\; \; \end{align} | (3.9e) |
\begin{align} g^{H}(x(t-\tau_2))\Lambda_6g(x(t-\tau_2))&\leq x^H(t-\tau_2)\Gamma_2 \Lambda_6x(t-\tau_2),\; \; \end{align} | (3.9f) |
where \Gamma_1\triangleq\mathrm{diag}\{\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2\} , \Gamma_2\triangleq\mathrm{diag}\{\rho_1^2, \rho_2^2, \ldots, \rho_n^2\} . Therefore, for any scalars \nu_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , we have
\begin{align} 0\leq&\nu_1[x^H(t)\Gamma_1 \Lambda_1x(t)-f^{T}(x(t))\Lambda_1f(x(t))], \end{align} | (3.10a) |
\begin{align} 0\leq&\nu_2[x^H(t-\tau_1)\Gamma_2 \Lambda_2x(t-\tau_1)-g^{H}(x(t-\tau_1))\Lambda_2g(x(t-\tau_1))], \end{align} | (3.10b) |
\begin{align} 0\leq&\nu_3[x^H(t-\tau_1(t))\Gamma_2 \Lambda_3x(t-\tau_1(t))-g^{H}(x(t-\tau_1(t)))\Lambda_3g(x(t-\tau_1(t)))], \end{align} | (3.10c) |
\begin{align} 0\leq&\nu_4[x^H(t-\tilde{\tau})\Gamma_2 \Lambda_4x(t-\tilde{\tau})-g^{H}(x(t-\tilde{\tau}))\Lambda_4g(x(t-\tilde{\tau}))], \end{align} | (3.10d) |
\begin{align} 0\leq&\nu_5[x^H(t-\tau_2(t))\Gamma_2 \Lambda_5x(t-\tau_2(t))-g^{H}(x(t-\tau_2(t)))\Lambda_5g(x(t-\tau_2(t)))], \end{align} | (3.10e) |
\begin{align} 0\leq&\nu_6[x^H(t-\tau_2)\Gamma_2 \Lambda_6x(t-\tau_2)-g^{H}(x(t-\tau_2))\Lambda_6g(x(t-\tau_2))]. \end{align} | (3.10f) |
According to the property of \Pi , it can be easily obtained that \sum_{b\in \mathcal {S}}\varpi_{ab} = 0 for all a\in \mathcal {S} . Hence, for every Hermitian matrix U_a , it follows that
\begin{align} x^{H}(t)\Bigg(\sum\limits_{b\in\mathcal {S}_1^a}\varpi_{ab}+\sum\limits_{b\in\mathcal {S}_2^a}\varpi_{ab}\Bigg)U_ax(t) = 0. \end{align} | (3.11) |
Combining (3.6a)–(3.11), one gets
\begin{align} &\mathbb{E}\Big\{\mathcal{L}\aleph(t)+\gamma \hbar^T(t)\hbar(t)-[y^T(t)My(t)+\hbar^T(t)W\hbar(t)+2y^T(t)N\hbar(t)]\Big\}\\ &\leq\mathbb{E}\bigg\{\xi^H(t)\Upsilon(t)\xi(t)+x^H(t)\sum\limits_{b\in\mathcal {S}_2^a}\varpi_{ab}(P_b-U_a)x(t)\bigg\}, \end{align} | (3.12) |
where \xi^H(t)\triangleq(x^H(t), x^H(t-\tau_1), x^H(t-\tau_1(t)), x^H(t-\tilde{\tau}), x^H(t-\tau_2(t)), x^H(t-\tau_2), f^T(x(t)), g^H(x(t-\tau_1)), g^H(x(t-\tau_1(t))), g^H(x(t-\tilde{\tau})), g^H(x(t-\tau_2(t))), g^H(x(t-\tau_2)), (\int_{t-\tilde{\tau}}^{t-\tau_1}x(s)\mathrm{d}s)^H, (\int_{t-\tau_2}^{t-\tilde{\tau}}x(s)\mathrm{d}s)^H, \hbar^T(t)) and
\begin{align*} \Upsilon(t)\triangleq \begin{bmatrix} \Upsilon_{11}(t)&\Upsilon_{12}(t)&\Upsilon_{13}\\ *&\Theta_{22}&\Upsilon_{23}\\ *&*&\Upsilon_{33} \end{bmatrix}, \end{align*} |
where
\begin{align*} \Upsilon_{11}(t) = \begin{bmatrix} \Omega_{11}(t)&0&0&0&0&0\\ *&\Omega_{22}&0&0&0&0\\ *&*&\Omega_{33}&0&0&0\\ *&*&*&\Omega_{44}&0&0\\ *&*&*&*&\Omega_{55}&0\\ *&*&*&*&*&\Omega_{66}\\ \end{bmatrix},\; \; \; \; \Upsilon_{12}(t) = \begin{bmatrix} \Omega_{17}(t)&0&\Omega_{19}(t)&0&\Omega_{1,11}(t)&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \end{bmatrix}, \end{align*}
in which \Omega_{11}(t) = \sum_{b\in\mathcal {S}_1^a}\varpi_{ab}(P_b-U_a)+P_a(-(C_a+\Delta C_a))+(-(C_a+\Delta C_a))P_a+Q_1+Q_2+Q_3+R_1+(\tilde{\tau}-\tau_1)R_2+S_1+(\tau_2-\tilde{\tau})S_2+\eta\vartheta V_1+(1-\eta)\vartheta V_3+\nu_1\Gamma_1\Lambda_1 , \Omega_{17}(t) = P_a(A_a+\Delta A_a) , \Omega_{19}(t) = \eta P_a(B_a+\Delta B_a) , \Omega_{1, 11}(t) = (1-\eta) P_a(B_a+\Delta B_a) , \Upsilon_{33} = \mathrm{diag}\{-1/(\tilde{\tau}-\tau_1)R_2, -1/(\tau_2-\tilde{\tau})S_2, \gamma{\bf{I}}-W\} , \Upsilon_{13} = [\beta_3\; 0\; 0\; 0\; 0\; 0]^T , \Upsilon_{23} = [\beta_4\; 0\; 0\; 0\; 0\; 0]^T with \beta_3^T = [0\; 0\; P_a] , \beta_4^T = [0\; 0\; -N] , and \Omega_{22} , \Omega_{33} , \Omega_{44} , \Omega_{55} , \Omega_{66} , \Theta_{22} are defined in (3.2).
Obviously, \Upsilon(t) = \Upsilon^1+\Delta \Upsilon^1(t) , in which
\begin{align*} \Upsilon^1\triangleq \begin{bmatrix} \vec{\Upsilon}_{11}&\vec{\Upsilon}_{12}&\Upsilon_{13}\\ *&\Theta_{22}&\Upsilon_{23}\\ *&*&\Upsilon_{33} \end{bmatrix},\; \; \; \; \Delta \Upsilon^1(t)\triangleq \begin{bmatrix} \Delta \vec{\Upsilon}_{11}& \Delta\vec{\Upsilon}_{12}&0\\ *&0&0\\ *&*&0 \end{bmatrix}, \end{align*} |
and \Upsilon^1 coincides with the matrix \Upsilon(t) except that \Omega_{11}(t) , \Omega_{17}(t) , \Omega_{19}(t) , and \Omega_{1, 11}(t) are, respectively, replaced by \vec{\Omega}_{11} , \vec{\Omega}_{17} , \vec{\Omega}_{19} , and \vec{\Omega}_{1, 11} , in which \vec{\Omega}_{11} = \sum_{b\in\mathcal {S}_1^a}\varpi_{ab}(P_b-U_a)-P_aC_a-C_aP_a+Q_1+Q_2+Q_3+R_1+(\tilde{\tau}-\tau_1)R_2+S_1+(\tau_2-\tilde{\tau})S_2+\eta\vartheta V_1+(1-\eta)\vartheta V_3+\nu_1\Gamma_1\Lambda_1 , \vec{\Omega}_{17} = P_aA_a , \vec{\Omega}_{19} = \eta P_aB_a , \vec{\Omega}_{1, 11} = (1-\eta) P_aB_a .
Then, it follows from Lemma 2.2 that there must exist a scalar \delta > 0 satisfying
\begin{align} \Delta \Upsilon^1(t) = \mathfrak{M}\mathfrak{W}(t)\mathfrak{N}+\mathfrak{N}^H\mathfrak{W}^T(t)\mathfrak{M}^H\leq \delta^{-1}\mathfrak{M}\mathfrak{M}^H+\delta \mathfrak{N}^H\mathfrak{N}, \end{align} | (3.13) |
where \mathfrak{M}^{H} = [D_a^TP_a\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0\; 0] , \mathfrak{N} = [-H_{1a}\; 0\; 0\; 0\; 0\; 0\; \eta H_{2a}\; 0\; \eta H_{3a}\; 0\; (1-\eta) H_{3a}\; 0\; 0\; 0\; 0] .
Considering the term x^H(t)\sum_{b\in\mathcal {S}_2^a}\varpi_{ab}(P_b-U_a)x(t) , we distinguish the following two cases:
Case 1: b\in \mathcal{S}_{uk}^a\backslash \{a\} . In this case \varpi_{ab}\geq 0 , so condition (3.3) implies \varpi_{ab}(P_b-U_a)\leq 0 ;
Case 2: b\in \mathcal{S}_{uk}^a\cap \{a\} . In this case \varpi_{aa}\leq 0 , so condition (3.4) implies \varpi_{ab}(P_b-U_a)\leq0 .
According to the above discussions, one has
\begin{align} x^H(t)\sum\limits_{b\in\mathcal {S}_2^a}\varpi_{ab}(P_b-U_a)x(t) \leq 0,\; \; \forall a\in \mathcal {S}. \end{align} | (3.14) |
Therefore, by Lemma 2.3, if conditions (3.2) and (3.14) hold, the inequality \xi^H(t)\Upsilon(t)\xi(t)+x^H(t)\sum_{b\in\mathcal {S}_2^a}\varpi_{ab}(P_b-U_a)x(t) < 0 is valid for all a\in \mathcal {S} .
As a result, based on the above discussion, one readily obtains
\begin{align} \mathbb{E}\Big\{\mathcal{L}\aleph(t)+\gamma \hbar^T(t)\hbar(t)\Big\}\leq\mathbb{E}\Big\{ y^T(t)My(t)+2y^T(t)N\hbar(t)+\hbar^T(t)W\hbar(t)\Big\}, \end{align} | (3.15) |
which means that
\begin{align} \mathbb{E}\big\{\mathcal {H}(\hbar,y,t)\big\}\geq \mathbb{E}\big\{\gamma \langle \hbar,\hbar\rangle_t+\aleph(t)-\aleph(0)\big\}. \end{align} | (3.16) |
In addition, since \aleph(0) = 0 under the zero initial condition and \aleph(t)\geq 0 , we can conclude \mathbb{E}\big\{\mathcal {H}(\hbar, y, t)\big\}\geq \gamma \langle \hbar, \hbar\rangle_t . It follows from Definition 2.1 that network (2.1) is strictly (M, N, W) -dissipative in the sense of expectation. This completes the proof.
Remark 3.1. The strict (M, N, W) -dissipativity criteria for the proposed Markovian switching CVNN (2.1) with partly unknown transition rates have been presented in Theorem 3.1. It should be emphasized that the obtained criteria depend on the probability \eta of the time-varying delay, which means the probabilistic time-varying delay has a great impact on the dissipativity/passivity performance of the considered system. Besides, to reflect practical situations, this paper also involves three other factors, namely Markovian switching, stochastic disturbance and uncertain parameters, which makes the dissipativity analysis much more involved. It is worth noting that the considered transition rate information is partly unknown, which extends many existing results [35]; meanwhile, the obtained theoretical results can easily be reduced to those in the existing literature [36,37]. In addition, the involved factors characterise realistic systems, so it is necessary to take them into account when discussing the dissipativity problem. The dissipativity issue of CVNNs has been addressed in [38], in which only stochastic disturbances are considered; hence, the dissipativity results obtained in this article cover those in [38].
Remark 3.2. It is apparent that the matrix inequalities (3.1)–(3.4) in Theorem 3.1 are all complex-valued and cannot be solved directly via the Matlab LMI Toolbox. In view of this, we can utilize the method first proposed in [39]: a complex Hermitian matrix P satisfies P < 0 if and only if
\begin{align*} \begin{pmatrix} \mathrm{Re}(P)&\mathrm{Im}(P)\\ -\mathrm{Im}(P)&\mathrm{Re}(P) \end{pmatrix} < 0, \end{align*} |
where \mathrm{Re}(P) and \mathrm{Im}(P) refer to, respectively, the real and imaginary parts of matrix P . In this way, the obtained complex-valued matrix inequalities can be transformed into real-valued matrix inequalities, which can be solved by the standard Matlab LMI Toolbox.
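A minimal numpy sketch of this real embedding, assuming one only wants to check negative definiteness of a given complex Hermitian matrix, is given below; for an actual LMI feasibility problem, the same embedding would be applied to the Hermitian blocks of (3.1)–(3.4) before handing the resulting real LMIs to a semidefinite programming solver.

```python
import numpy as np

def real_embedding(P):
    """Real symmetric embedding of a complex Hermitian matrix P, cf. Remark 3.2:
    P < 0 if and only if [[Re(P), Im(P)], [-Im(P), Re(P)]] < 0."""
    Re, Im = P.real, P.imag
    return np.block([[Re, Im], [-Im, Re]])

def is_negative_definite(H):
    return np.all(np.linalg.eigvalsh(H) < 0)

# Quick consistency check on a random Hermitian matrix (illustrative only)
rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P = -(X @ X.conj().T) - np.eye(4)
print(is_negative_definite(P), is_negative_definite(real_embedding(P)))  # True True
```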
Building on the analysis in Theorem 3.1, setting N = {\bf{I}} , M = 0 and W = 2\gamma{\bf{I}} directly yields the robust passivity criterion of system (2.1), which is presented in the theorem below.
Theorem 3.2. Under Assumptions 2.1 and 2.2, network (2.1) is robustly passive in the sense of expectation if there exist positive definite Hermitian matrices P_a , Q_\iota\; (\iota = 1, 2, 3) , R_\kappa\; (\kappa = 1, 2) and S_\kappa , Hermitian matrices U_a with appropriate dimensions, diagonal matrices \Lambda_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , and constants \vartheta > 0 , \lambda > 0 , \delta > 0 and \nu_\varsigma > 0 such that matrix inequalities (3.1), (3.3) and (3.4) and the inequality below are valid for every a\in\mathcal{S} :
\begin{align} \begin{bmatrix} \Theta_{11}&\Theta_{12}&\Theta_{13}\\ *&\tilde{\Theta}_{22}&\tilde{\Theta}_{23}\\ *&*&\tilde{\Theta}_{33} \end{bmatrix} < 0\; , \end{align} | (3.17) |
where
\begin{align*} &\tilde{\Theta}_{23} = \begin{bmatrix} 0&0&-{\bf{I}}&0&\eta \delta H_{2a}^T\\ 0&0&0&0&0\\ 0&0&0&0&\eta \delta H_{3a}^T\\ 0&0&0&0&0\\ 0&0&0&0&(1-\eta) \delta H_{3a}^T \end{bmatrix},\\ &\tilde{\Theta}_{22} = \mathrm{diag}\{-\nu_1\Lambda_1,-\nu_2\Lambda_2,-\nu_3\Lambda_3,-\nu_4\Lambda_4,-\nu_5\Lambda_5,-\nu_6\Lambda_6\},\\ &\tilde{\Theta}_{33} = \mathrm{diag}\Big\{-\frac{1}{\tilde{\tau}-\tau_1}R_2,-\frac{1}{\tau_2-\tilde{\tau}}S_2,-\gamma{\bf{I}},-\delta{\bf{I}},-\delta{\bf{I}}\Big\}, \end{align*} |
and the remaining symbols are defined as in Theorem 3.1.
Proof. Following a derivation similar to that of Theorem 3.1, one has
\begin{align} 2\mathbb{E}\Big\{\int_0^t y^T(s)\hbar(s)\mathrm{d}s\Big\}\geq \mathbb{E} \Big\{-\gamma\int_0^t \hbar^T(s)\hbar(s)\mathrm{d}s+\aleph(t)-\aleph(0)\Big\}. \end{align} | (3.18) |
Moreover, owing to \aleph(0) = 0 and \aleph(t)\geq 0 , one directly gets 2\mathbb{E}\{ \int_0^ty^T(s)\hbar(s)\mathrm{d}s\}\geq-\gamma \int_0^t\hbar^T(s)\hbar(s)\mathrm{d}s . Combining this with Definition 2.2, one can conclude that the considered system (2.1) is robustly passive in the sense of expectation. This completes the proof.
Remark 3.3. It is noteworthy that in most existing works on Markovian switching networks [40,41], the transition rate information is assumed to be either completely known or completely inaccessible, which corresponds to the two common cases \mathcal {S}_{uk}^a = \emptyset , \mathcal {S}_k^a = \mathcal{S} and \mathcal {S}_{k}^a = \emptyset , \mathcal {S}_{uk}^a = \mathcal{S} . Such assumptions impose great limitations in practical applications. In reality, it is very difficult to measure and acquire the transition rate information because of random factors. Hence, it is important to study Markovian switching systems with partly unknown transition rates, and some relevant results have been reported [42,43]. Based on these considerations, the analysis of this paper is more meaningful.
Remark 3.4. It should be pointed out that, when dealing with stochastic CVNNs, the considered system is usually decomposed into real and imaginary parts, which doubles the dimensions and increases the computational complexity [36,37]; moreover, the tool adopted there is the usual real It \hat{o} 's formula. Compared with [36,37], the main advantages of the present results are threefold. First, instead of the real-imaginary separation technique, we discuss the system performance directly in the complex domain; second, by virtue of the generalized It \hat{o} 's formula in the complex domain and stochastic analysis methods, mode-delay-dependent criteria are obtained; third, the considered transition rate information is partly unknown, which further reflects the realistic setting.
When N = 1 , we can reduce system (2.1) to the stochastic CVNN with probabilistic time-varying delay as follows:
\begin{align} &\mathrm{d}x (t) = \big[-\big(C+\Delta C(t)\big)x(t)+\big(A+\Delta A(t)\big)f(x(t))+\big(B+\Delta B(t)\big)g(x(t-\tau(t)))\\ &\quad\quad\quad\; +\hbar(t)\Big]\mathrm{d}t+h(t,x(t),x(t-\tau(t)))\mathrm{d}\omega(t),\\ &y(t) = f(x(t)), \qquad\qquad\qquad\qquad t\geq0. \end{align} | (3.19) |
In addition, it is assumed that the parameter uncertainties satisfy
\begin{align} [\Delta C(t)\; \Delta A(t)\; \Delta B(t)] = D\mathfrak{W}(t)[H_{1}\; H_{2}\; H_{3}], \end{align} | (3.20) |
in which the real matrices D and H_{1} and the complex matrices H_{2} and H_{3} are known, and \mathfrak{W}(t) satisfies the inequality constraint (2.4). From Theorems 3.1 and 3.2, the following criteria follow readily.
Corollary 3.1. Under Assumptions 2.1 and 2.2, network (3.19) is strictly (M, N, W) -dissipative from the input \hbar(t) in the sense of expectation if there exist positive definite Hermitian matrices P , Q_\iota\; (\iota = 1, 2, 3) , R_\kappa\; (\kappa = 1, 2) and S_\kappa , diagonal matrices \Lambda_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , and constants \vartheta > 0 , \lambda > 0 , \delta > 0 and \nu_\varsigma > 0 such that the complex matrix inequalities below are valid:
\begin{align} &P\leq \vartheta {\bf{I}}, \; \; \end{align} | (3.21) |
\begin{align} &\begin{bmatrix} \overleftarrow{\Theta}_{11}&\overleftarrow{\Theta}_{12}&\overleftarrow{\Theta}_{13}\\ *&\Theta_{22}&\overleftarrow{\Theta}_{23}\\ *&*&\Theta_{33} \end{bmatrix} < 0\; , \end{align} | (3.22) |
where
\begin{align*} &\overleftarrow{\Theta}_{23} = \begin{bmatrix} 0&0&-N&0&\eta \delta H_{2}^T\\ 0&0&0&0&0\\ 0&0&0&0&\eta \delta H_{3}^T\\ 0&0&0&0&0\\ 0&0&0&0&(1-\eta) \delta H_{3}^T \end{bmatrix},\\ &\overleftarrow{\Theta}_{11} = \mathrm{diag}\{\overleftarrow{\Omega}_{11},\Omega_{22},\Omega_{33},\Omega_{44},\Omega_{55},\Omega_{66}\},\\ &\overleftarrow{\Theta}_{12} = [\tilde{\beta}_1\; 0\; 0\; 0\; 0\; 0]^T,\; \; \overleftarrow{\Theta}_{13} = [\tilde{\beta}_2\; 0\; 0\; 0\; 0\; 0]^T, \end{align*} |
with \tilde{\beta}_1^T = [PA\; 0\; \eta PB\; 0\; (1-\eta)PB\; 0] , \tilde{\beta}_2^T = [0\; 0\; P\; PD\; \delta H_{1}^T] , and
\begin{align*} &\overleftarrow{\Omega}_{11} = -PC-CP+Q_1+Q_2+Q_3+R_1+(\tilde{\tau}-\tau_1)R_2+S_1+(\tau_2-\tilde{\tau})S_2\\ &\quad\quad\; \; +\eta\vartheta V_1+(1-\eta)\vartheta V_3+\nu_1\Gamma_1\Lambda_1. \end{align*} |
The other symbols are taken as in Theorem 3.1.
Corollary 3.2. Under Assumptions 2.1 and 2.2, if there exist positive definite Hermitian matrices P , Q_\iota\; (\iota = 1, 2, 3) , R_\kappa\; (\kappa = 1, 2) and S_\kappa , diagonal matrices \Lambda_\varsigma > 0\; (\varsigma = 1, 2, 3, 4, 5, 6) , and constants \vartheta > 0 , \lambda > 0 , \delta > 0 and \nu_\varsigma > 0 such that inequality (3.21) and the inequality below are valid:
\begin{align} \begin{bmatrix} \overleftarrow{\Theta}_{11}&\overleftarrow{\Theta}_{12}&\overleftarrow{\Theta}_{13}\\ *&\tilde{\Theta}_{22}&\overleftarrow{\Theta}_{23}\\ *&*&\tilde{\Theta}_{33} \end{bmatrix} < 0\; , \end{align} | (3.23) |
in which the remaining symbols can be found in Theorem 3.2 and Corollary 3.1, then network (3.19) is robustly passive in the sense of expectation.
Remark 3.5. In Theorems 3.1 and 3.2 and Corollaries 3.1 and 3.2, sufficient delay-dependent dissipativity/passivity criteria are derived for the stochastic CVNN model with probabilistic time-varying delay. From the complex matrix inequalities (3.2), (3.17), (3.22) and (3.23), it can be inferred that \tau_1(t) and \tau_2(t) must satisfy \dot{\tau}_1(t)\leq \mu_1 < 1 and \dot{\tau}_2(t)\leq \mu_2 < 1 . In other words, these matrix inequalities have no feasible solutions when \mu_1 > 1 or \mu_2 > 1 . This conservatism is mainly due to the stochastic disturbances, which strongly constrain the construction of the Lyapunov functional and thus lead to methodological limitations.
This section provides two examples to show the effectiveness and validity of the obtained results.
Example 4.1. Consider a three-mode Markovian switching CVNN (2.1), in which C_1 = \mathrm{diag}\{5.3, 4.2\} , C_2 = \mathrm{diag}\{4.5, 4.8\} , C_3 = \mathrm{diag}\{6.1, 5.9\} , the other parametric coefficients are taken as
\begin{align*} &A_1 = \begin{bmatrix} -0.3 + 1.5\vec{i} & -0.7-0.4\vec{i}\\ 1.2 + 0.7\vec{i} & -0.4 + 0.5\vec{i} \end{bmatrix},\; \; A_2 = \begin{bmatrix} 1.2 + 0.8\vec{i} & -1.2 + 0.7\vec{i} \\ 0.9 - 0.6\vec{i} & -0.5 - 0.8\vec{i} \end{bmatrix},\\ &A_3 = \begin{bmatrix} 1.3 + 0.9\vec{i} &1.2 - 1.3\vec{i} \\ 1.2 - 0.8\vec{i} & -0.8 - 0.6\vec{i} \end{bmatrix},\; \; \; \; B_1 = \begin{bmatrix} 1.2 - 0.7\vec{i} & 0.9 + 1.5\vec{i}\\ 0.7+0.8\vec{i} & -0.8+0.7\vec{i} \end{bmatrix},\\ & B_2 = \begin{bmatrix} -1.3 - 0.8\vec{i} & 0.8 + 1.1\vec{i} \\ 0.6 - 0.8\vec{i} & 0.6 - 0.8\vec{i} \end{bmatrix},\; \; \; \; B_3 = \begin{bmatrix} -0.6 + 1.2\vec{i} &0.6 - 0.5\vec{i} \\ 0.8 - 0.8\vec{i} & 0.9 - 1.1\vec{i} \end{bmatrix}, \end{align*} |
and for \iota = 1, 2 , the activation function f(x(t)) is chosen as f_\iota(x_\iota(t)) = (1/4)(|x_\iota+1|-|x_\iota-1|) and the activation function g(x(t)) is chosen as g_\iota(x_\iota(t)) = (1/2)(|x_\iota+1|-|x_\iota-1|) . Moreover, the noise intensity functions are chosen as h(t, x(t), x(t-\tau_1(t))) = [0.5\sin(v_1(t)), 0.4\sin(0.01v_2(t-\tau_1(t)))]^T+ \vec{i}[0.5\sin(w_1(t)), 0.4\sin(0.01w_2(t-\tau_1(t)))]^T and h(t, x(t), x(t-\tau_2(t))) = [0.5\sin(v_1(t)), 0.3\sin(0.01v_2(t-\tau_2(t)))]^T+ \vec{i}[0.5\sin(w_1(t)), 0.3\sin(0.01w_2(t-\tau_2(t)))]^T , where x(t) = (x_1(t), x_2(t))^H with x_1(t) = v_1(t)+\vec{i}w_1(t) and x_2(t) = v_2(t)+\vec{i}w_2(t) . Therefore, it is easy to obtain that
\begin{align*} &\Gamma_1 = \begin{bmatrix} 0.25 & 0\\ 0 & 0.25 \end{bmatrix},\; \; \; \; \; \; \Gamma_2 = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix},\; \; \; \; \; \; V_2 = \begin{bmatrix} 0.32 & 0\\ 0 & 0.32 \end{bmatrix},\\ & V_1 = V_3 = \begin{bmatrix} 0.25 & 0\\ 0 & 0.25 \end{bmatrix},\; \; \; \; \; \; \; \; V_4 = \begin{bmatrix} 0.18 & 0\\ 0 & 0.18 \end{bmatrix}. \end{align*} |
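The values of \Gamma_1 and \Gamma_2 follow from the Lipschitz constants of the chosen activation functions (1/2 for f and 1 for g ). The following sketch checks these bounds numerically on random complex pairs and exhibits the constant of f along the real segment (-1,1) ; the sample points are arbitrary.

```python
import numpy as np
rng = np.random.default_rng(5)

f = lambda z: 0.25 * (np.abs(z + 1) - np.abs(z - 1))   # activation f in Example 4.1
g = lambda z: 0.50 * (np.abs(z + 1) - np.abs(z - 1))   # activation g in Example 4.1

# The Lipschitz bounds |f(z1)-f(z2)| <= 0.5 |z1-z2| and |g(z1)-g(z2)| <= |z1-z2|
# hold for all complex pairs, hence Gamma_1 = 0.25 I and Gamma_2 = I.
z1 = rng.standard_normal(100000) + 1j * rng.standard_normal(100000)
z2 = rng.standard_normal(100000) + 1j * rng.standard_normal(100000)
assert np.all(np.abs(f(z1) - f(z2)) <= 0.5 * np.abs(z1 - z2) + 1e-12)
assert np.all(np.abs(g(z1) - g(z2)) <= 1.0 * np.abs(z1 - z2) + 1e-12)

# The bound of f is attained along the real segment (-1, 1), where f(x) = x / 2
x1, x2 = 0.2, -0.3
print(abs(f(x1) - f(x2)) / abs(x1 - x2))               # prints 0.5
```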
Moreover, the Markov chain s(t) , whose sojourn times obey the exponential distribution, takes values in \mathcal {S} = \{1, 2, 3\} with s(0) = 1 (as shown in Figure 1). The transition rate matrix with partly unknown elements is given as
\begin{align*} \Pi = \begin{bmatrix} -1.8 & ?&? \\ ? & ?&0.9\\ ? & ?&-1.2 \end{bmatrix}. \end{align*} |
In addition, \tau_1(t) = 0.4 + 0.2 \sin(t) and \tau_2(t) = 1 + 0.4 \cos(t) , it can be calculated that \tau_1 = 0.2 , \tau_2 = 1.4 , \tilde{\tau} = 1 , \mu_1 = 0.2 , \mu_2 = 0.4 , respectively. The dissipation parameters are taken as follows:
\begin{align*} M = \begin{bmatrix} -5.2 &0.4+0.3\vec{i} \\ 0.4-0.3\vec{i} &-6.2 \end{bmatrix},\; \; \; W = \begin{bmatrix} 66.0& 0.5+0.4\vec{i} \\ 0.5-0.4\vec{i}& 68.0 \end{bmatrix},\; \; \; N = \begin{bmatrix} -0.2 & 0.5 \\ 0.3 & 0.4 \end{bmatrix}. \end{align*} |
Furthermore, under constrains (2.3) and (2.4), take system parameters as
\begin{align*} &D_1 = \begin{bmatrix} 0.1 &0.1\\ 0.2 &0.3 \end{bmatrix},\; \; \; \; H_{21} = \begin{bmatrix} 0.2 + 0.3\vec{i}& 0.1 + 0.2\vec{i}\\ -0.3 - 0.2\vec{i} &-0.4 + 0.6\vec{i} \end{bmatrix},\; \; H_{23} = \begin{bmatrix} 0.4 + 0.6\vec{i}& -0.3 + 0.2\vec{i}\\ 0.3 + 0.4\vec{i} & 0.4 - 0.4\vec{i} \end{bmatrix},\\ &D_2 = \begin{bmatrix} 0.2& 0.1\\ 0.2& 0.1 \end{bmatrix},\; \; \; \; H_{22} = \begin{bmatrix} 0.2 + 0.3\vec{i}& 0.1 + 0.2\vec{i}\\ -0.3 - 0.2\vec{i} &-0.4 + 0.6\vec{i} \end{bmatrix},\; \; H_{31} = \begin{bmatrix} 0.4 - 0.2\vec{i}& 0.2 - 0.3\vec{i}\\ -0.2 - 0.4\vec{i} &0.3 - 0.5\vec{i} \end{bmatrix},\\ &D_3 = \begin{bmatrix} 0.3& 0.2\\ 0.1& 0.4 \end{bmatrix},\; \; \; \; H_{33} = \begin{bmatrix} 0.3 + 0.2\vec{i}&0.1 + 0.3\vec{i}\\ -0.1+ 0.1\vec{i}& 0.4 - 0.6\vec{i} \end{bmatrix},\; \; \; \; H_{32} = \begin{bmatrix} 0.5 - 0.4\vec{i}& -0.1 + 0.2\vec{i}\\ 0.2 + 0.5\vec{i}& 0.2 - 0.7\vec{i} \end{bmatrix},\\ &H_{11} = \begin{bmatrix} -0.1 &0.3\\ 0.2& 0.4 \end{bmatrix},\; \; \; \; H_{12} = \begin{bmatrix} 0.2& -0.5\\ -0.3& 0.3 \end{bmatrix},\; \; \; \; H_{13} = \begin{bmatrix} 0.4 &0.2\\ -0.4 &0.3 \end{bmatrix}. \end{align*} |
In addition, a random 2\times2 matrix \mathfrak{W}(t) satisfying condition (2.4) is chosen.
Set \eta = 0.9 . It follows from Theorem 3.1 that inequalities (3.1)–(3.4) have feasible solutions; for space considerations, only part of them are given, with \nu_{\varsigma} = 0.1 , \gamma = 2.0295 , \vartheta = 22.2611 , \delta = 5.0120 , \Lambda_1 = \mathrm{diag}\{ 481.9889,430.7401\} , \Lambda_2 = \mathrm{diag}\{ 15.2188, 17.1055 \} , and
\begin{align*} &P_{1} = \begin{bmatrix} 15.4202 & 1.8413 -1.6281\vec{i}\\ 1.8413 +1.6281\vec{i} & 19.6244 \end{bmatrix},\; \; Q_{1} = \begin{bmatrix} 3.1240 & -0.2667 +0.6116\vec{i}\\ -0.2667 -0.6116\vec{i} & 3.5508 \end{bmatrix},\\ &P_{2} = \begin{bmatrix} 20.1603 & 1.0637 -1.1014\vec{i}\\ 1.0637 +1.1014\vec{i} & 20.1634 \end{bmatrix},\; \; Q_{3} = \begin{bmatrix} 3.1240 & -0.2667 + 0.6116\vec{i}\\ -0.2667 - 0.6116\vec{i} & 3.5508 \end{bmatrix},\\ &U_{1} = \begin{bmatrix} 21.2975& 0.8031-0.5850\vec{i}\\ 0.8031+0.5850\vec{i}& 20.7861 \end{bmatrix},\; \; R_{2} = \begin{bmatrix} 2.1657 & -0.3288+0.7540\vec{i}\\ -0.3288-0.7540\vec{i} & 2.4761 \end{bmatrix},\\ &U_{3} = \begin{bmatrix} 25.7186& 1.2725-0.6614\vec{i}\\ 1.2725+0.6614\vec{i}&30.3314 \end{bmatrix},\; \; S_{2} = \begin{bmatrix} 3.3132 &-0.4009 +0.9188\vec{i}\\ -0.4009 -0.9188\vec{i}& 3.6930 \end{bmatrix}. \end{align*} |
From Theorem 3.1, in the sense of expectation, it can be immediately obtained that CVNN (2.1) is strictly (M, N, W) -dissipative.
For simulation purposes, the initial conditions are listed as the four cases below. Specifically, Case 1: x_1(t) = 2.2-5.2\vec{i} , x_2(t) = -1.4+1.2\vec{i} for t\in [-1.4, 0] . Case 2: x_1(t) = 0.3-1.9\vec{i} , x_2(t) = -4.4+4.2\vec{i} for t\in [-1.4, 0] . Case 3: x_1(t) = -3.4+0.6\vec{i} , x_2(t) = 2.8-3.4\vec{i} for t\in [-1.4, 0] . Case 4: x_1(t) = -1.3+3.8\vec{i} , x_2(t) = 1.4-1.2\vec{i} for t\in [-1.4, 0] . Figure 2 shows the variation of the probabilistic time-varying delay \tau(t) . The Bernoulli sequence \eta(t) , which governs the probabilistic delay, is given in Figure 3. Figure 4 shows the time evolution of the state x(t) (real and imaginary parts) of the Markovian switching network (2.1) without input \hbar(t) , perturbed by stochastic noises. Moreover, for different values of \eta in Theorem 3.1, Table 1 lists the maximum dissipativity performance \gamma , which reflects that the Brownian motion and the Markovian switching have a great influence on the dissipative performance index \gamma .
Table 1. Maximum dissipativity performance \gamma for different values of \eta .

| Value of \eta | 0 | 0.1 | 0.2 | 0.4 | 0.5 | 0.9 | 1.0 |
|---|---|---|---|---|---|---|---|
| Theorem 3.1 | 0.4627 | 0.7307 | 0.7522 | 1.0139 | 0.8350 | 2.0295 | 2.1968 |
| Corollary 3.1 | 1.0302 | 1.0302 | 1.3021 | 1.1357 | 1.2671 | 2.2713 | 2.6587 |
Remark 4.1. It should be emphasized that, when the considered probabilistic time-varying delay is reduced to a general time-varying delay and there is no Markovian switching, it can be verified that the results in [38] have no feasible solutions while ours do, which means that the criteria obtained here are less conservative than those in [38]. On the other hand, this paper takes into account Markovian switching with partly unknown transition rates and stochastic disturbance. Not only does this paper propose more general dissipativity and passivity criteria than the existing relevant literature [44,45], but the obtained results also demonstrate that the Markovian switching and the Brownian motion have a great influence on the dissipativity/passivity performance index \gamma .
Example 4.2. The considered CVNN (2.1) has the same parameters and Markov chain s(t) as in Example 4.1. Set M = 0 , N = {\bf{I}} , W = 2\gamma{\bf{I}} and \eta = 0.7 . By resorting to Corollary 3.1, a set of feasible solutions of the matrix inequalities (3.1), (3.3), (3.4) and (3.17) can be derived as \vartheta = 9.0356 , \gamma = 35.2150 , \nu_\varsigma = 0.1 , \delta = 2.0228 , \Lambda_{1} = \mathrm{diag}\{168.5290, 145.5380\} , \Lambda_{2} = \mathrm{diag}\{7.1525, 8.0283\} , and
\begin{align*} &P_{1} = \begin{bmatrix} 6.2618 & 0.7267-0.6136\vec{i}\\ 0.7267+0.6136\vec{i} & 7.8991 \end{bmatrix},\; \; Q_{1} = \begin{bmatrix} 1.4694 & -0.1230+ 0.2636\vec{i}\\ -0.1230- 0.2636\vec{i} &1.6718 \end{bmatrix},\\ &P_{2} = \begin{bmatrix} 8.1543 & 0.4110 -0.4481\vec{i}\\ 0.4110 +0.4481\vec{i} & 8.1238 \end{bmatrix},\; \; Q_{2} = \begin{bmatrix} 1.4694 & -0.1230 +0.2636\vec{i}\\ -0.1230 -0.2636\vec{i} & 1.6718 \end{bmatrix},\\ &U_{1} = \begin{bmatrix} 8.6633& 0.2913 -0.2261\vec{i}\\ 0.2913 +0.2261\vec{i} & 8.4028 \end{bmatrix},\; \; R_{2} = \begin{bmatrix} 1.0172 & -0.1495+ 0.3202\vec{i}\\ -0.1495- 0.3202\vec{i} & 1.1647 \end{bmatrix},\\ &U_{3} = \begin{bmatrix} 10.6872&0.5077 -0.2558\vec{i}\\ 0.5077 +0.2558\vec{i} & 12.3308 \end{bmatrix},\; \; S_{2} = \begin{bmatrix} 1.5103 & -0.1688 +0.3615\vec{i}\\ -0.1688 -0.3615\vec{i} & 1.6774 \end{bmatrix}. \end{align*}
According to Corollary 3.1, we can conclude that system (2.1) is robustly passive in the sense of expectation. Moreover, the maximum passivity performance index \gamma for different values of the probabilistic constant \eta is presented in Table 2, which again reflects that the Brownian motion and the Markovian switching have a great effect on the passivity performance index \gamma .
Table 2. Maximum passivity performance index \gamma for different values of \eta .

| Value of \eta | 0 | 0.1 | 0.2 | 0.4 | 0.5 | 0.9 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Theorem 3.2 | 105.3514 | 4393.0000 | 41.3763 | 19.9482 | 87.5144 | 35.2150 | 1306.7000 |
| Corollary 3.2 | 13.4568 | 12.6961 | 9.7946 | 16.5451 | 14.0339 | 7.2976 | 28.9809 |
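For completeness, the following sketch indicates how the choice M = 0 , N = {\bf{I}} , W = 2\gamma{\bf{I}} in Example 4.2 specializes the quadratic supply rate to the passivity setting. The supply-rate form written below is the standard one for (M, N, W) -dissipativity and is assumed to agree with the definition given in Section 2 (up to the strictness constant \delta ):
\begin{align*} r\big(\hbar(t), y(t)\big) &= y^{*}(t)My(t) + 2\,\mathrm{Re}\big\{y^{*}(t)N\hbar(t)\big\} + \hbar^{*}(t)W\hbar(t)\\ &\stackrel{M = 0,\, N = {\bf{I}},\, W = 2\gamma{\bf{I}}}{=} 2\,\mathrm{Re}\big\{y^{*}(t)\hbar(t)\big\} + 2\gamma\,\hbar^{*}(t)\hbar(t), \end{align*}
so that the corresponding dissipation inequality reduces, under this assumed convention, to a passivity condition of the form \mathbb{E}\int_{0}^{t_f} 2\,\mathrm{Re}\big\{y^{*}(t)\hbar(t)\big\}\,\mathrm{d}t \geq -2\gamma\, \mathbb{E}\int_{0}^{t_f} \hbar^{*}(t)\hbar(t)\,\mathrm{d}t .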
The robust dissipativity/passivity problem for stochastic Markovian switching CVNNs with probabilistic time-varying delay has been investigated in this work. The considered probabilistic delay is characterized by a series of random variables obeying the Bernoulli distribution. Moreover, the concerned parameter uncertainties are not only norm-bounded but also mode-dependent. To reflect more realistic dynamics of the presented model, the transition rate information is only partly known. By combining robust analysis tools and stochastic analysis methods with the generalized complex Itô formula, some sufficient mode-delay-dependent criteria for the (M, N, W) -dissipativity/passivity have been derived in terms of complex linear matrix inequalities. Finally, two examples have been presented to verify the validity and correctness of the proposed results.
This work was supported in part by the National Natural Science Foundation of China under Grant 61906084, in part by the High-level Talent Research Foundation of Anhui Agricultural University under Grant rc382106, in part by the Natural Science Foundation of Jiangsu Province of China under Grant BK20180815, and in part by the Natural Science Foundation of Anhui Province under Grant 1908085MG237.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
[1] M. Kobayashi, Symmetric complex-valued Hopfield neural networks, IEEE T. Neur. Net. Lear., 28 (2017), 1011–1015. doi: 10.1109/TNNLS.2016.2518672
[2] D. L. Lee, Relaxation of the stability condition of the complex-valued neural networks, IEEE T. Neural Networ., 12 (2001), 1260–1262. doi: 10.1109/72.950156
[3] M. K. Muezzinoglu, C. Guzelis, J. M. Zurada, A new design method for the complex-valued multistate Hopfield associative memory, IEEE T. Neural Networ., 14 (2003), 891–899. doi: 10.1109/TNN.2003.813844
[4] S. Berhanu, Liouville's theorem and the maximum modulus principle for a system of complex vector fields, Commun. Part. Diff. Eq., 19 (1994), 1805–1827. doi: 10.1080/03605309408821074
[5] T. Nitta, Solving the XOR problem and the detection of symmetry using a single complex-valued neuron, Neural Networks, 16 (2003), 1101–1105. doi: 10.1016/S0893-6080(03)00168-0
[6] A. Hirose, Recent progress in applications of complex-valued neural networks, In: International conference on artificial intelligence and soft computing, Berlin: Springer, 2010.
[7] X. Liu, Z. Li, Finite time anti-synchronization of complex-valued neural networks with bounded asynchronous time-varying delays, Neurocomputing, 387 (2020), 129–138. doi: 10.1016/j.neucom.2020.01.035
[8] W. Gong, J. Liang, J. Cao, Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays, Neural Networks, 70 (2015), 81–89. doi: 10.1016/j.neunet.2015.07.003
[9] R. Samidurai, R. Sriraman, S. Zhu, Leakage delay-dependent stability analysis for complex-valued neural networks with discrete and distributed time-varying delays, Neurocomputing, 338 (2019), 262–273. doi: 10.1016/j.neucom.2019.02.027
[10] X. Liu, T. Chen, Global exponential stability for complex-valued recurrent neural networks with asynchronous time delays, IEEE T. Neur. Net. Lear., 27 (2015), 593–606. doi: 10.1109/TNNLS.2015.2415496
[11] K. Guan, Global power-rate synchronization of chaotic neural networks with proportional delay via impulsive control, Neurocomputing, 283 (2018), 256–265. doi: 10.1016/j.neucom.2018.01.027
[12] J. C. Willems, Dissipative dynamical systems part I: General theory, Arch. Rational Mech. Anal., 45 (1972), 321–351. doi: 10.1007/bf00276493
[13] D. Hill, P. Moylan, The stability of nonlinear dissipative systems, IEEE T. Automat. Contr., 21 (1976), 708–711. doi: 10.1109/TAC.1976.1101352
[14] V. Belevitch, Classical network theory, Holden Day, 1968.
[15] S. Ding, Z. Wang, H. Zhang, Dissipativity analysis for stochastic memristive neural networks with time-varying delays: a discrete-time case, IEEE T. Neur. Net. Lear., 29 (2018), 618–630. doi: 10.1109/TNNLS.2016.2631624
[16] G. Nagamani, T. Radhika, Dissipativity and passivity analysis of T-S fuzzy neural networks with probabilistic time-varying delays: a quadratic convex combination approach, Nonlinear Dyn., 82 (2015), 1325–1341. doi: 10.1007/s11071-015-2241-8
[17] S. Ramasamy, G. Nagamani, Dissipativity and passivity analysis for discrete-time complex-valued neural networks with leakage delay and probabilistic time-varying delays, Int. J. Adapt. Control, 31 (2017), 876–902. doi: 10.1002/acs.2736
[18] P. Balasubramaniam, G. Nagamani, S. Ramasamy, Robust dissipativity and passivity analysis for discrete-time stochastic neural networks with time-varying delay, Complexity, 21 (2016), 47–58. doi: 10.1002/cplx.21614
[19] Q. Li, J. Liang, W. Gong, State estimation for semi-Markovian switching CVNNs with quantization effects and linear fractional uncertainties, J. Franklin I., 358 (2021), 6326–6347. doi: 10.1016/j.jfranklin.2021.05.035
[20] Z. Yan, Y. Song, J. H. Park, Quantitative mean square exponential stability and stabilization of stochastic systems with Markovian switching, J. Franklin I., 355 (2018), 3438–3454. doi: 10.1016/j.jfranklin.2018.02.026
[21] S. Senthilraj, R. Raja, J. Cao, H. M. Fardoun, Dissipativity analysis of stochastic fuzzy neural networks with randomly occurring uncertainties using delay dividing approach, Nonlinear Anal. Model. Control, 24 (2019), 561–581. doi: 10.15388/NA.2019.4.5
[22] S. Sathananthan, I. Lyatuu, M. Knap, L. Keel, Robust passivity and synthesis of discrete-time stochastic systems with multiplicative noise under Markovian switching, Commun. Appl. Anal., 17 (2013), 451–469.
[23] E. Tian, D. Yue, G. Wei, Robust control for Markovian jump systems with partially known transition probabilities and nonlinearities, J. Franklin I., 350 (2013), 2069–2083. doi: 10.1016/j.jfranklin.2013.05.011
[24] G. Zong, D. Yang, L. Hou, Q. Wang, Robust finite-time H_{\infty} control for Markovian jump systems with partially known transition probabilities, J. Franklin I., 350 (2013), 1562–1578. doi: 10.1016/j.jfranklin.2013.04.003
[25] R. Zhang, D. Zeng, X. Liu, S. Zhong, J. Cheng, New results on stability analysis for delayed Markovian generalized neural networks with partly unknown transition rates, IEEE T. Neur. Net. Lear., 30 (2019), 3384–3395. doi: 10.1109/TNNLS.2019.2891552
[26] Y. Liu, C. Zhang, Y. Kao, C. Hou, Exponential stability of neutral-type impulsive Markovian jump neural networks with general incomplete transition rates, Neural Process. Lett., 47 (2018), 325–345. doi: 10.1007/s11063-017-9650-2
[27] J. Liang, W. Gong, T. Huang, Multistability of complex-valued neural networks with discontinuous activation functions, Neural Networks, 84 (2016), 125–142. doi: 10.1016/j.neunet.2016.08.008
[28] Q. Song, H. Yan, Z. Zhao, Y. Liu, Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects, Neural Networks, 79 (2016), 108–116. doi: 10.1016/j.neunet.2016.03.007
[29] J. Ubøe, Complex valued multiparameter stochastic integrals, J. Theor. Probab., 8 (1995), 601–624. doi: 10.1007/bf02218046
[30] P. Wang, Y. Hong, H. Su, Stabilization of stochastic complex-valued coupled delayed systems with Markovian switching via periodically intermittent control, Nonlinear Anal. Hybri., 29 (2018), 395–413. doi: 10.1016/j.nahs.2018.03.006
[31] K. Kreutz-Delgado, The complex gradient operator and the \mathbb{C}\mathbb{R}-calculus, 2009, arXiv: 0906.4835.
[32] X. Chen, Q. Song, Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales, Neurocomputing, 121 (2013), 254–264. doi: 10.1016/j.neucom.2013.04.040
[33] Q. Li, J. Liang, Dissipativity of the stochastic Markovian switching CVNNs with randomly occurring uncertainties and general uncertain transition rates, Int. J. Syst. Sci., 51 (2020), 1102–1118. doi: 10.1080/00207721.2020.1752418
[34] X. Chen, Q. Song, Y. Liu, Z. Zhao, Global \mu-stability of impulsive complex-valued neural networks with leakage delay and mixed delays, Abstr. Appl. Anal., 2014 (2014), 397532. doi: 10.1155/2014/397532
[35] U. Humphries, G. Rajchakit, R. Sriraman, P. Kaewmesri, P. Chanthorn, C. P. Lim, et al., An extended analysis on robust dissipativity of uncertain stochastic generalized neural networks with Markovian jumping parameters, Symmetry, 12 (2020), 1035. doi: 10.3390/sym12061035
[36] P. Chanthorn, G. Rajchakit, J. Thipcha, C. Emharuethai, R. Sriraman, C. P. Lim, et al., Robust stability of complex-valued stochastic neural networks with time-varying delays and parameter uncertainties, Mathematics, 8 (2020), 742. doi: 10.3390/math8050742
[37] P. Chanthorn, G. Rajchakit, U. Humphries, P. Kaewmesri, R. Sriraman, C. P. Lim, A delay-dividing approach to robust stability of uncertain stochastic complex-valued Hopfield delayed neural networks, Symmetry, 12 (2020), 683. doi: 10.3390/sym12050683
[38] M. Liu, X. Wang, Z. Zhang, Z. Wang, Dissipativity analysis of complex-valued stochastic neural networks with time-varying delays, IEEE Access, 7 (2019), 165076–165087. doi: 10.1109/ACCESS.2019.2953244
[39] P. Gahinet, A. Nemirovskii, A. J. Laub, M. Chilali, The LMI control toolbox, In: Proceedings of 1994 33rd IEEE conference on decision and control, 1994. doi: 10.1109/CDC.1994.411440
[40] Q. Li, J. Liang, W. Gong, Stability and synchronization for impulsive Markovian switching CVNNs: matrix measure approach, Commun. Nonlinear Sci., 77 (2019), 126–140. doi: 10.1016/j.cnsns.2019.04.022
[41] J. Zhou, T. Cai, W. Zhou, D. Tong, Master-slave synchronization for coupled neural networks with Markovian switching topologies and stochastic perturbation, Int. J. Robust Nonlin., 28 (2018), 2249–2263. doi: 10.1002/rnc.4013
[42] K. Cui, J. Zhu, C. Li, Exponential stabilization of Markov jump systems with mode-dependent mixed time-varying delays and unknown transition rates, Circuits Syst. Signal Process., 38 (2019), 4526–4547. doi: 10.1007/s00034-019-01085-2
[43] R. Xu, Y. Kao, M. Gao, Finite-time synchronization of Markovian jump complex networks with generally uncertain transition rates, T. I. Meas. Control, 39 (2017), 52–60. doi: 10.1177/0142331215600046
[44] P. Chanthorn, G. Rajchakit, S. Ramalingam, C. P. Lim, R. Ramachandran, Robust dissipativity analysis of Hopfield-type complex-valued neural networks with time-varying delays and linear fractional uncertainties, Mathematics, 8 (2020), 595. doi: 10.3390/math8040595
[45] G. Rajchakit, R. Sriraman, Robust passivity and stability analysis of uncertain complex-valued impulsive neural networks with time-varying delays, Neural Process. Lett., 53 (2021), 581–606. doi: 10.1007/s11063-020-10401-w