
Citation: Chunhong Li, Sanxing Liu. Stochastic invariance for hybrid stochastic differential equation with non-Lipschitz coefficients[J]. AIMS Mathematics, 2020, 5(4): 3612-3633. doi: 10.3934/math.2020234
In this paper, we consider stochastic invariance for the following hybrid stochastic differential equations (HSDEs)
$$dX(t)=f(X(t),r(t))\,dt+g(X(t),r(t))\,dw(t),\qquad t>t_0, \tag{1.1}$$
where $r(t)$ is a Markov chain taking values in $\mathbb{M}=\{1,2,3,\cdots,N\}$ with generator $\Gamma=(\gamma_{ij})_{N\times N}$ (see [1]). This equation can be regarded as the result of the equations
$$dX(t)=f(X(t),i)\,dt+g(X(t),i)\,dw(t),\qquad 1\leqslant i\leqslant N, \tag{1.2}$$
switching from one to another according to the movement of the Markov chain, with the initial condition
$$X(0)=\xi\in\mathbb{R}^d,\qquad r(t_0)=i_0\in\mathbb{M},$$
where $f:\mathbb{R}^d\times\mathbb{M}\to\mathbb{R}^d$ and $g:\mathbb{R}^d\times\mathbb{M}\to\mathbb{R}^{d\times d}$.
A jump system is a hybrid system whose state vector has two components, X(t) and r(t). The first is in general referred to as the state, while the second is regarded as the mode. In operation, the jump system switches from one mode to another in a random way, and the switching between the modes is governed by a Markov process with a discrete and finite state space. Due to increasing demands from real systems and phenomena in which both continuous dynamics and discrete events are involved, hybrid models have been studied intensively for decades and have received a lot of attention, for example: existence and uniqueness of solutions and approximate solutions (see [1,2]); stability theory (see [3,4,5,6,7]); almost sure stability (see [8,9,10]). As for stochastic invariance, to the best of our knowledge, no paper has investigated the stochastic invariance theory for hybrid stochastic differential equations. Thus, we make the first attempt to study this problem under a non-Lipschitz condition on the coefficients. Our results are inspired by [11], where the stochastic invariance theory for Eq (1.2) with $r(t)\equiv1$ was studied.
Stochastic invariance is a way to study the state space of solutions, which is usually considered from the viewpoint of attractors (see, e.g., [12,13]). The first stochastic invariance result can be found in [14]. Since then, a large body of literature has addressed invariance as well as the connected notion of viability; characterizations of both have been expressed through stochastic tangent cones (see [11,15]), the distance function (see [16]), martingale decomposition theory (see [17]) or other approaches. Although these approaches differ, they have at least one thing in common: each has to make a trade-off between the smoothness assumed on the domain and the regularity assumed on the coefficients, which restricts the applicability of the existing results in practice. To overcome this difficulty and to analyze the stochastic invariance of solutions of hybrid stochastic differential equations with non-Lipschitz coefficients, we adopt the method developed by Abi Jaber and Bouchard [11]. The method is based on the properties of the Itô integral and the compactness of the solution space, and it requires neither smoothness of the domain nor a Lipschitz condition on the coefficients. For Theorem 3.1 below, we first check that the solution space of Eq (1.2) is also locally compact (see Remark 2.3), and then we prove the sufficiency part of Theorem 3.1 by the maximum principle, in the same way as Theorem 2.3 in [11]. Under an additional constraint on the jump system (the martingale measure $\tilde\mu(dt,dy)=v(dt,dy)-\mu(dy)\,dt$ is independent of the Brownian motion $w$), we prove the necessity part by applying the Itô formula twice; this follows the same procedure as in [11], except for the extra term $\sum_{j=1}^N\gamma_{ij}\phi(x,j)$.
The rest of this paper is organized as follows. In Section 2, we give some definitions and preliminaries. In Section 3, we state the main results and their proofs. In Section 4, we compare stochastic invariance with robustness for hybrid systems. In Section 5, we provide a numerical simulation and the most probable phase portrait to illustrate our main results.
Let $\mathbb{R}^d$ be the d-dimensional Euclidean space endowed with the inner product $\langle u,v\rangle:=\sum_{i=1}^d u_iv_i$ and the Euclidean norm $|u|:=\langle u,u\rangle^{1/2}$ for $u,v\in\mathbb{R}^d$. Let $\mathbb{M}^d$ denote the collection of all $d\times d$ matrices with real entries, and let $A_{ij}$ denote the entry in the i-th row and j-th column. $\mathbb{S}^d$ stands for the cone of symmetric $d\times d$ matrices. We use the standard notation $I_d$ for the $d\times d$ identity matrix. Given $x=(x_1,\cdots,x_d)\in\mathbb{R}^d$, $\mathrm{diag}[x]$ denotes the diagonal matrix whose i-th diagonal entry is $x_i$. If $A$ is a symmetric positive semi-definite matrix, then $A^{1/2}$ denotes its symmetric square root. By a filtered probability space we mean a quadruple $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geqslant0},P)$, where $\{\mathcal{F}_t\}_{t\geqslant0}$ is a filtration of sub-$\sigma$-algebras of $\mathcal{F}$ satisfying the usual conditions, i.e., $(\Omega,\mathcal{F},P)$ is a complete probability space, $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$, and for each $t\geqslant0$, $\mathcal{F}_{t+}:=\bigcap_{s>t}\mathcal{F}_s=\mathcal{F}_t$. Let $\{W_t\}_{t\geqslant0}$ be an m-dimensional Brownian motion defined on the filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geqslant0},P)$. Let $r(t)$, $t\geqslant t_0$, be a right-continuous Markov chain on the same probability space taking values in the finite state space $\mathbb{M}=\{1,2,\cdots,N\}$ with generator $\Gamma=(\gamma_{ij})_{N\times N}$ given by
$$P(r(t+\delta)=j\mid r(t)=i)=\begin{cases}\gamma_{ij}\delta+o(\delta), & i\neq j,\\ 1+\gamma_{ii}\delta+o(\delta), & i=j,\end{cases}$$
in which $\delta>0$ and $\lim_{\delta\to0^+}o(\delta)/\delta=0$. Here $\gamma_{ij}\geqslant0$ is the transition rate from $i$ to $j$ if $i\neq j$, while
$$\gamma_{ii}=-\sum_{j\neq i}\gamma_{ij}.$$
We always assume that $r(t)$ is independent of $w(t)$. It is known that almost all sample paths of $r(t)$ are right-continuous step functions with a finite number of simple jumps in any finite subinterval of $\mathbb{R}_+$. We stress that the Markov chain $r(t)$ can be represented as a stochastic integral with respect to a Poisson random measure. Indeed, let $\Delta_{ij}$, $i\neq j$, be consecutive (with respect to the lexicographic ordering on $\mathbb{M}\times\mathbb{M}$), left-closed and right-open intervals of the real line, each having length $\gamma_{ij}$. Define a function
$$\eta:\mathbb{M}\times\mathbb{R}\to\mathbb{R}$$
by
$$\eta(i,y)=\begin{cases} j-i, & \text{if } y\in\Delta_{ij},\\ 0, & \text{otherwise}.\end{cases}$$
Then
$$dr(t)=\int_{\mathbb{R}}\eta(r(t^-),y)\,v(dt,dy),$$
where $v(dt,dy)$ is a Poisson random measure with intensity $dt\times\mu(dy)$, and $\mu(\cdot)$ is the Lebesgue measure on $\mathbb{R}$.
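As an illustration of this construction, the following is a minimal simulation sketch (our own addition, not from the original paper) of a right-continuous Markov chain with a given generator $\Gamma$, using the standard exponential holding-time description rather than the Poisson-random-measure representation; the function name and the example generator are our own choices.

```python
import numpy as np

def simulate_markov_chain(Gamma, i0, T, rng=None):
    """Simulate a right-continuous Markov chain r(t) on {0,...,N-1}
    with generator Gamma over [0, T], starting from state i0.
    Returns jump times and the visited states (piecewise-constant path)."""
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while t < T:
        rate = -Gamma[i, i]              # total jump rate out of state i
        if rate <= 0:                    # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / rate) # exponential holding time
        if t >= T:
            break
        p = Gamma[i].copy()
        p[i] = 0.0
        p /= p.sum()                     # jump distribution over the other states
        i = rng.choice(len(p), p=p)
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Example: the two-state generator used in Example 5.1 (states relabelled 0, 1)
Gamma = np.array([[-1.0, 1.0],
                  [ 3.0, -3.0]])
times, states = simulate_markov_chain(Gamma, i0=0, T=10.0)
print(times[:5], states[:5])
```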
Let us consider the following assumptions:
(H1) f and g satisfy the linear growth condition. That is, there exists a constant K>0 such that
$$|f(x,i)|^2\vee|g(x,i)|^2\leqslant K(1+|x|^2) \tag{2.1}$$
for all x∈Rd and i∈M;
(H2) C can be extended to a $C^{1,1}_{loc}(\mathbb{R}^d,\mathbb{S}^d)$ function such that $C=gg^T$ on D;
(H3) Equation (1.2) has an equilibrium at $X(t)=0$, i.e., $f(0,i)=0$ and $g(0,i)=0$, for all $i\in\mathbb{M}$.
The first two conditions coincide with (H1) and (H2) of Theorem 2.3 in [11]. Because of the characteristics of the jump system, we add condition (H3) to ensure the continuity of the sample paths of solutions of Eq (1.2).
Definition 2.1. A closed subset $D\subset\mathbb{R}^d$ is said to be stochastically invariant with respect to Eq (1.2) if, for all $x\in D$, there exists a weak solution (X, W) of Eq (1.2) starting at $X(0)=x$ such that $X(t)\in D$ almost surely for all $t\geqslant0$.
Let $C_0^\infty(\mathbb{R}^d)$ denote the space of real-valued, infinitely differentiable functions on $\mathbb{R}^d$ with compact support. For any $\phi\in C_0^\infty(\mathbb{R}^d)$, we define an operator $L$ as follows.
Definition 2.2. Let L be a semi-elliptic differential operator of the form (see [18], Definition 8.3.2)
$$L\phi(X(t),i)=\sum_j f_j(X(t),i)\frac{\partial\phi(X(t),i)}{\partial x_j}+\frac12\sum_{j,k}(gg^T)_{jk}(X(t),i)\frac{\partial^2\phi(X(t),i)}{\partial x_j\partial x_k}+\sum_{k=1}^N\gamma_{ik}\phi(X(t),k), \tag{2.2}$$
where $i\in\mathbb{M}$ and the coefficients $[f_j(X(t),i)]=f(X(t),i)$ and $(gg^T)_{jk}(X(t),i)$ are locally bounded Borel measurable functions on $\mathbb{R}^d$. Then we say that a probability measure $\tilde P^x$ on $((\mathbb{R}^d)^{[0,\infty)},\mathcal{B})$ solves the martingale problem for L (starting at x) if the process
$$M(t)=\phi(X(t),i)-\phi(X(0),i)-\int_0^t L\phi(X(s),i)\,ds \tag{2.3}$$
is a martingale with respect to $\mathcal{M}_t$.
Then
$$d\phi(X(t),i)=L\phi(X(t),i)\,dt+D\phi\,g(X(t),i)\,dw(t)+\int_{\mathbb{R}}\big[\phi(X(t),i+\eta(i,y))-\phi(X(t),i)\big]\,\tilde\mu(dt,dy).$$
We assume that $\tilde\mu$ is independent of $B$, where $\tilde\mu(dt,dy)=v(dt,dy)-\mu(dy)\,dt$ is a martingale measure (see [19]).
Remark 2.1. L is a semi-elliptic differential operator, not the generator of an Itô diffusion X(t) given by Eq (1.2) (see [18], p. 145).
Definition 2.3. A probability measure $Q^\xi$ on $(\Omega,\mathcal{M})$ solves the martingale problem associated with f and $C=gg^T$ with initial data $\xi$ if
1. $Q^\xi(X^\circ(0)=\xi)=1$;
2. $\phi(X^\circ(t))-\int_0^t(L\phi)(X^\circ(s))\,ds$, $t\geqslant0$, is an $(\mathcal{M}_t,Q^\xi)$-martingale for all $\phi\in C_0^\infty(\mathbb{R}^d)$.
Now we state the relation between the martingale problem of Definition 2.3 and weak solutions. First, assume that there exists a weak solution of Eq (1.2) with initial data $\xi$; then there exists a sextuple $(\Omega,\mathcal{F},\mathcal{F}_t,P,B,X)$ such that
$$X_m(t)=\xi_m+\int_0^t f_m(X(s),i)\,ds+\sum_{j=1}^d\int_0^t g_{mj}(X(s),i)\,dB_j(s),\qquad m=1,2,\cdots,d;\ i\in\mathbb{M},$$
holds a.s., or equivalently
$$dX_m(t)=f_m(X(t),i)\,dt+\sum_{j=1}^d g_{mj}(X(t),i)\,dB_j(t). \tag{2.4}$$
Define the probability measure
$$Q^\xi(A)=P(X\in A),\qquad A\in\mathcal{M}.$$
Then $Q^\xi$ solves the martingale problem of Definition 2.3 for the coefficients f and $C=gg^T$, where $T$ denotes the transpose. For $\phi\in C_0^\infty(\mathbb{R}^d\times\mathbb{M})$, applying the Itô formula, we have
$$\phi(X(t),i)=\phi(X(0),i)+\int_0^t L\phi(X(s),i)\,ds+\int_0^t D\phi\,g(X(s),i)\,dw(s)+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy),\qquad t\geqslant0.$$
Then we obtain that
$$M(t)=\phi(X(t),i)-\phi(X(0),i)-\int_0^t L\phi(X(s),i)\,ds=\int_0^t D\phi\,g(X(s),i)\,dw(s)+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy)$$
is an $(\mathcal{F}^B_t,P)$-martingale, where $(\mathcal{F}^B_t)_{t\geqslant0}$ denotes the complete filtration generated by the Brownian motion B. Indeed, Itô integrals are martingales, and $\tilde\mu(ds,dy)$ is a martingale measure independent of B. Then, by transformation of measures, we get that
$$\phi(X^\circ(t),i)-\int_0^t L\phi(X^\circ(s),i)\,ds,\qquad t\geqslant0,$$
is an $(\mathcal{M}_t,Q^\xi)$-martingale on the canonical space $\Omega=C(\mathbb{M}\times\mathbb{R};\mathbb{R}^d)$. This shows that $Q^\xi$ (which is the distribution of the solution process) solves the martingale problem.
Definition 2.4. Let $N^1_D(x)$, $N^2_D(x)$ and $N^{1,prox}_D(x)$ be respectively the first-order normal cone, the second-order normal cone and the proximal normal cone at the point $x\in D$:
$$N^1_D(x)=\{u\in\mathbb{R}^d:\langle u,y-x\rangle\leqslant o(\|y-x\|),\ \forall y\in D\},$$
$$N^2_D(x)=\{(u,v)\in\mathbb{R}^d\times\mathbb{S}^d:\langle u,y-x\rangle+\tfrac12\langle v(y-x),y-x\rangle\leqslant o(\|y-x\|^2),\ \forall y\in D\},$$
$$N^{1,prox}_D(x)=\{u\in\mathbb{R}^d:\|u\|=d_D(x+u)\},$$
in which $d_D$ is the distance function to D.
Remark 2.2. (see [8], Theorem 3.23) Let $p\geqslant2$ and $x_0\in L^p_{\mathcal{F}_{t_0}}(\Omega;\mathbb{R}^d)$. Assume that the linear growth condition (H1) and condition (H3) hold. Then
$$E|X(t)-X(s)|^p\leqslant C^*|t-s|^{p/2},\qquad t_0\leqslant s<t\leqslant T, \tag{2.5}$$
where $C^*=2^{p-2}(1+E|X(0)|^p)e^{p\alpha(T-t_0)}\big(|2(T-t_0)|^{p/2}+|p(p-1)|^{p/2}\big)$ and $\alpha=\sqrt{K}+\frac{K(p-1)}{2}$.
Hence, Kolmogorov's continuity criterion ensures that the sample paths of X are (locally) $\epsilon$-Hölder continuous for any $\epsilon\in(0,\frac12)$.
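The moment bound (2.5) can be checked by simulation. The sketch below (our own illustration, not from the paper) uses a simple two-mode linear HSDE with assumed coefficients $f(x,i)=a_ix$ and $g(x,i)=b_ix$ and an assumed generator, estimates $E|X(t)-X(s)|^4$ by Monte Carlo for several gaps $|t-s|=h$, and confirms that the estimate scales roughly like $h^2$, in line with (2.5) for $p=4$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed two-mode linear coefficients and generator (illustration only)
a = np.array([-1.0, 0.5])
b = np.array([0.3, 0.6])
Gamma = np.array([[-2.0, 2.0],
                  [1.0, -1.0]])

def increment_moment(s, t, p=4, n_paths=20_000, dt=1e-3, x0=1.0):
    """Monte Carlo estimate of E|X(t)-X(s)|^p for the hybrid SDE via Euler-Maruyama."""
    X = np.full(n_paths, x0)
    r = np.zeros(n_paths, dtype=int)
    leave_prob = -np.diag(Gamma) * dt          # per-step probability of leaving a state
    n_steps, Xs = int(round(t / dt)), None
    for k in range(1, n_steps + 1):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + a[r] * X * dt + b[r] * X * dW  # EM update
        r = np.where(rng.random(n_paths) < leave_prob[r], 1 - r, r)
        if Xs is None and k * dt >= s:
            Xs = X.copy()                      # snapshot of X(s)
    return np.mean(np.abs(X - Xs) ** p)

for h in (0.2, 0.1, 0.05):
    m = increment_moment(s=1.0, t=1.0 + h)
    print(f"h={h:.2f}  E|X(t)-X(s)|^4={m:.3e}  m/h^2={m/h**2:.3e}")
```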
Remark 2.3. (see [20], Theorem 2.1) When the diffusion coefficient g is independent of the past history, the trajectory field has a version whose sample functions are almost all compact.
In this section, we shall give the main results of this paper. First of all, we need to prepare several lemmas for the latter stochastic invariance analysis.
Lemma 3.1. Assume that $C\in C^{1,1}_{loc}(\mathbb{R}^d,\mathbb{S}^d)$. Let $X(0)=x\in D$ and $i\in\mathbb{M}$ be such that the spectral decomposition of C(x,i) is given by
$$C(x,i)=Q(x,i)\,\mathrm{diag}[\lambda_1(x,i),\cdots,\lambda_r(x,i),0,\cdots,0]\,Q^T(x,i),$$
where $\lambda_1(x,i)>\lambda_2(x,i)>\cdots>\lambda_r(x,i)>0$ and $Q(x,i)Q^T(x,i)=I_d$, $r\leqslant d$.
Then there exist an open (bounded) neighborhood N(x) of x and two $\mathbb{M}^d$-valued measurable functions on $\mathbb{R}^d$,
$$y\longmapsto Q(y,i)=[q_1(y,i),\cdots,q_d(y,i)]$$
and
$$y\longmapsto \Lambda(y,i)=\mathrm{diag}[\lambda_1(y,i),\cdots,\lambda_d(y,i)],$$
such that
(i) $C(y,i)=Q(y,i)\Lambda(y,i)Q^T(y,i)$ and $Q(y,i)Q^T(y,i)=I_d$, for all $y\in\mathbb{R}^d$;
(ii) $\lambda_1(y,i)>\lambda_2(y,i)>\cdots>\lambda_r(y,i)>\max\{\lambda_j(x,i),\,r+1\leqslant j\leqslant d\}\vee0$, for all $y\in N(x)$;
(iii) $\bar g:y\longmapsto \bar Q(y,i)\bar\Lambda(y,i)^{1/2}$ belongs to $C^{1,1}(N(x),\mathbb{M}^d)$, in which $\bar Q=[q_1,\cdots,q_r,0,\cdots,0]$ and $\bar\Lambda=\mathrm{diag}[\lambda_1,\cdots,\lambda_r,0,\cdots,0]$.
Moreover, we have
$$\Big\langle u,\sum_{j=1}^d D\bar g_j(x,i)\bar g_j(x,i)\Big\rangle=\Big\langle u,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle,\qquad \text{for all } u\in\ker C(x,i). \tag{3.1}$$
Proof. Every $u\in\ker C(x,i)$ satisfies
$$u^T\bar Q(x,i)=u^T\bar g(x,i)=0.$$
Since $C\in C^{1,1}_{loc}$, the map $\bar C=\bar g\bar g^T$ is differentiable at (x,i). Combining this with Definitions 7.4 and 7.6, and the facts that $\bar g\bar g^T=C\bar Q\bar Q^T$ and $\bar Q(x,i)\bar Q^T(x,i)=C(x,i)C(x,i)^+$, we have
$$\begin{aligned}
\Big\langle u,\sum_{j=1}^d D\bar g_j(x,i)\bar g_j(x,i)\Big\rangle
&=\sum_{j=1}^d u^T D\bar g_j(x,i)e_j\,\bar g_j(x,i)e_j
=\sum_{j=1}^d u^T(e_j^T\otimes I_d)D\bar g_j(x,i)\bar g_j(x,i)e_j\\
&=\sum_{j=1}^d e_j(I_d\otimes u^T)D\bar g_j(x,i)\bar g_j(x,i)e_j
=\mathrm{Tr}\big[(I_d\otimes u^T)D\bar g(x,i)\bar g(x,i)\big]\\
&=\mathrm{Tr}\big[(I_d\otimes u^T)DC(x,i)C(x,i)C(x,i)^+\big]
=\Big\langle u,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle.
\end{aligned}$$
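The decomposition used in Lemmas 3.1 and 3.2 can be checked numerically. The following is a small NumPy sketch (our own illustration, not part of the original proof) that builds a rank-deficient symmetric matrix C, computes an orthogonal Q and diagonal Λ with C = QΛQ^T, forms the reduced square root ḡ = Q̄Λ̄^{1/2}, and verifies ḡḡ^T = C and QQ^T = I_d; the test matrix and tolerances are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient symmetric positive semi-definite matrix C (rank r = 2, d = 4)
A = rng.standard_normal((4, 2))
C = A @ A.T

# Spectral decomposition C = Q diag(lam) Q^T with eigenvalues sorted decreasingly
lam, Q = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1]
lam, Q = lam[idx], Q[:, idx]
lam = np.clip(lam, 0.0, None)          # remove tiny negative round-off

# Reduced square root g_bar = Q_bar Lambda_bar^{1/2} (zero columns beyond rank r)
r = int(np.sum(lam > 1e-10))
Lam_bar_sqrt = np.diag(np.sqrt(np.where(np.arange(len(lam)) < r, lam, 0.0)))
g_bar = Q @ Lam_bar_sqrt

assert np.allclose(Q @ Q.T, np.eye(4), atol=1e-10)        # Q is orthogonal
assert np.allclose(g_bar @ g_bar.T, C, atol=1e-10)        # g_bar g_bar^T = C
print("rank r =", r, "eigenvalues:", np.round(lam, 4))
```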
Lemma 3.2. Let $g\in C^{1,1}_b(\mathbb{R}^d,\mathbb{S}^d)$ (i.e., g is differentiable with a bounded and globally Lipschitz derivative). Then $C:=g^2\in C^{1,1}_{loc}(\mathbb{R}^d,\mathbb{S}^d_+)$ and
$$\Big\langle u,\sum_{j=1}^d Dg_j(x,i)g_j(x,i)\Big\rangle=\Big\langle u,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle\qquad \text{for all } x\in D \text{ and } u\in\ker g(x,i).$$
Proof. Fix $(x,i)\in D\times\mathbb{M}$ and $u\in\ker g(x,i)$. Since
$$C(x,i)C(x,i)^+g(x,i)=\big[Q\Lambda Q^TQ\Lambda^+Q^T\big](x,i)\,g(x,i)=g(x,i),$$
we have
$$\begin{aligned}
\mathrm{Tr}\big[(I_d\otimes u^T)DC(x,i)C(x,i)C(x,i)^+\big]
&=\mathrm{Tr}\big[(I_d\otimes u^T)\big[(g^T(x,i)\otimes I_d)Dg(x,i)+(I_d\otimes g(x,i))Dg(x,i)\big]C(x,i)C(x,i)^+\big]\\
&=\mathrm{Tr}\big[(g^T(x,i)\otimes u^T)Dg(x,i)C(x,i)C(x,i)^+\big]
=\mathrm{Tr}\big[(I_d\otimes u^T)Dg(x,i)g(x,i)\big].
\end{aligned}$$
Combining this with the above equality, we get
$$\begin{aligned}
\Big\langle u,\sum_{j=1}^d Dg_j(x,i)g_j(x,i)\Big\rangle
&=\sum_{j=1}^d u^TD\big(g_j(x,i)e_j\big)\,g_j(x,i)e_j
=\sum_{j=1}^d u^T\big[(e_j\otimes I_d)D(g_j(x,i))+g(x,i)(I_d\otimes I_d)De_j\big]g_j(x,i)e_j\\
&=\sum_{j=1}^d u^T(e_j\otimes I_d)Dg_j(x,i)g_j(x,i)e_j
=\sum_{j=1}^d e_j^T(I_d\otimes u^T)Dg_j(x,i)g_j(x,i)e_j\\
&=\mathrm{Tr}\big[(I_d\otimes u^T)Dg(x,i)g(x,i)\big]
=\mathrm{Tr}\big[(I_d\otimes u^T)DC(x,i)C(x,i)C(x,i)^+\big]
=\Big\langle u,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle.
\end{aligned}$$
Lemma 3.3. Let $\{W_t\}_{t\geqslant0}$ be a d-dimensional Brownian motion on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geqslant0},P)$. Let $\alpha\in\mathbb{R}^d$, and let $\{\beta_t\}_{t\geqslant0}$, $\{\delta_t\}_{t\geqslant0}$ and $\{\theta_t\}_{t\geqslant0}$ be adapted processes with values in $\mathbb{R}^d$, $\mathbb{M}^d$ and $\mathbb{R}$, respectively, such that
(1) $\beta$ is bounded;
(2) $\int_0^t\|\delta_s\|^2\,ds<\infty$ for all $t\geqslant0$;
(3) there exists a random variable $\eta>0$ such that
$$\int_0^t\|\delta_s-\delta_0\|^2\,ds=o(t^{1+\eta});$$
(4) $\theta$ is continuous at 0 a.s.
Suppose that for all $t\geqslant0$,
$$\int_0^t\theta_s\,ds+\int_0^t\Big(\alpha+\int_0^s\beta_r\,dr+\int_0^s\delta_r\,dw(r)\Big)^T dw(s)\leqslant0. \tag{3.2}$$
Then,
(a) $\alpha=0$;
(b) $-\delta_0\in\mathbb{S}^d_+$;
(c) $\theta_0-\frac12\mathrm{Tr}(\delta_0)\leqslant0$.
Proof. Observe that the conditions of Lemma 3.3 differ from those of ([21], Lemma 2.1), but the conclusions are the same. So our main aim is to reduce to the case of ([21], Lemma 2.1), which holds when, e.g., $R_t=o(t)$. Since $w_i(t)^2=2\int_0^t w_i(s)\,dw_i(s)+t$, inequality (3.2) gives
$$\Big(\theta_0-\frac12\mathrm{Tr}(\delta_0)\Big)t+\sum_{i=1}^d\alpha_iw_i(t)+\sum_{i=1}^d\frac{\delta^{ii}_0}{2}\big(w_i(t)\big)^2+\sum_{1\leqslant i\neq j\leqslant d}\delta^{ij}_0\int_0^t w_i(s)\,dw_j(s)+R_t\leqslant0,$$
where
$$R_t=\int_0^t(\theta_s-\theta_0)\,ds+\int_0^t\Big(\int_0^s\beta_r\,dr\Big)^Tdw(s)+\int_0^t\Big(\int_0^s(\delta_r-\delta_0)\,dw(r)\Big)^Tdw(s)=R^1_t+R^2_t+R^3_t.$$
Since $\theta$ is continuous at 0, we get $R^1_t=o(t)$ a.s. Moreover, in view of ([22], Proposition 3.9), we have $R^2_t=o(t)$ a.s., as $\beta$ is bounded. Define $M^{ij}=\delta^{ij}-\delta^{ij}_0$ and $M^i=\int_0^\cdot\sum_{j=1}^d M^{ij}_r\,dw_j(r)$ for $i,j\in\{1,2,\cdots,d\}$. By condition (3), we deduce that $\langle M^i\rangle_s=o(s^{1+\eta})$ a.s. By the Dambis-Dubins-Schwarz theorem, $M^i$ is a time-changed Brownian motion, so the law of the iterated logarithm for Brownian motion yields $(M^i_s)^2=o(s^{1+\frac\eta2})$ a.s., and hence $\langle R^3\rangle_t=o(t^{2+\frac\eta2})$ a.s. Applying the Dambis-Dubins-Schwarz theorem and the law of the iterated logarithm again, we get $R^3_t=o(t)$ a.s.
Theorem 3.1. (Invariance characterization) Let D be closed. Assume that f, g and C are continuous and satisfy assumptions (H1)-(H3). Then the set D is stochastically invariant with respect to Eq (1.2) if and only if
$$C(x,i)u=0, \tag{3.3}$$
$$\Big\langle u,f(x,i)-\frac12\sum_{j=1}^d DC_j(CC^+)_j(x,i)\Big\rangle+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0, \tag{3.4}$$
for any initial data $X(0)=x\in D$, all $u\in N^1_D(x)$ and $i\in\mathbb{M}$.
In this subsection, we prove that the conditions of Theorem 3.1 are necessary. Our general strategy is similar to [11]. The main idea consists of using the spectral decomposition of C in the form $Q\Lambda Q^T$, in which Q is an orthogonal matrix and $\Lambda$ is diagonal and positive semi-definite. We divide the proof into three cases (Ⅰ, Ⅱ, Ⅲ).
Ⅰ. The case of distinct and non-zero eigenvalues
Since $C\in C^{1,1}_{loc}(\mathbb{R}^d,\mathbb{S}^d)$ and C(x) has distinct and non-zero eigenvalues, we can reduce to the case where Q and $\Lambda^{1/2}$ are smooth enough and $\Lambda$ has strictly positive entries. The dynamics of X can be written as
$$dX(t)=f(X(t),i)\,dt+Q(X(t),i)\Lambda(X(t),i)^{1/2}\,dB_t,$$
where $B=\int_0^\cdot Q(X(s),i)^T\,dw(s)$ is a Brownian motion. We consider a smooth function $\phi:\mathbb{R}^d\to\mathbb{R}$ such that $\max_D\phi=\phi(x,i)$ for $x=X(0)$, and such that there exists a constant $M_1>0$ with $D^2\phi(X(t),i)\leqslant M_1$ for all $t>0$. Since D is stochastically invariant, $\phi(X(t),i)\leqslant\phi(x,i)$ for all $t\geqslant0$. Further, by the Itô formula, we get
$$\int_0^t L\phi(X(s),i)\,ds+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy)+\int_0^t D\phi\,g(X(s),i)\,dw(s)\leqslant0.$$
Let $(\mathcal{F}^B_t)_{t\geqslant0}$ be the complete filtration generated by the Brownian motion B. Since B and $\tilde\mu(dt,dy)$ are independent, we get
$$\int_0^t E^{\mathcal{F}^B}\big[L\phi(X(s),i)\big]\,ds+\int_0^t E^{\mathcal{F}^B}\big[D\phi\,g(X(s),i)\big]\,dw(s)\leqslant0.$$
Applying the Itô formula to $D\phi\,g(X(s),i)$, we have
$$\int_0^t E^{\mathcal{F}^B}\big[L\phi(X(s),i)\big]\,ds+\int_0^t\Big\{E^{\mathcal{F}^B}\big[D\phi\,g(X(0),i)\big]+\int_0^s E^{\mathcal{F}^B}\big[L(D\phi)g(X(r),i)\big]\,dr+\int_0^s E^{\mathcal{F}^B}\big[D(D\phi\,g)g(X(r),i)\big]\,dw(r)\Big\}\,dw(s)\leqslant0.$$
First note that $E^{\mathcal{F}^B}[L(D\phi)g(X(s),i)]$ is bounded, the condition
$$\int_0^s E^{\mathcal{F}^B}\big[D(D\phi\,g)g(X(r),i)\big]\,dw(r)=\int_0^s E^{\mathcal{F}^B}\big[D^2\phi\,gg^T(X(r),i)+(I_d\otimes D\phi)Dg\,g^T(X(r),i)\big]\,dw(r)<\infty$$
holds, and $L\phi(X(s),i)$ is continuous at 0; all of these follow from (H3), the smoothness of Q and $\Lambda$, and the fact that $\phi$ has compact support. Moreover,
$$F:=D(D\phi\,g)g(X(s),i)=D^2\phi\,gg^T(X(s),i)+(I_d\otimes D\phi)Dg\,g^T(X(s),i)$$
is Lipschitz continuous.
Combining these with Remark 2.2 and (H1), we can find constants $L'>0$ and $M_1>0$ such that
$$E\big[|F(X(s),i)-F(X(r),i)|^4\big]\leqslant M_1E|X(s)-X(r)|^4\leqslant L'|s-r|^2.$$
Thus, by Lemma 3.3, we have
$$D\phi\,g(X(0),i)=D\phi\,gg^T(x,i)=0$$
and
$$L\phi(X(0),i)-\frac12\mathrm{Tr}\big[D(D\phi\,g)g(X(0),i)\big]=D\phi\,f(X(0),i)-\frac12(I_d\otimes D\phi)Dg\,g^T(X(0),i)+\sum_{j=1}^N\gamma_{ij}\phi(X(0),j)=\Big\langle D\phi,f-\frac12Dg\,g^T\Big\rangle(x,i)+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0.$$
Under appropriate regularity conditions, we can choose a suitable test function $\phi$ such that $D\phi(x,i)=u^T$. Then, by Lemma 3.2, we get (3.3) and (3.4).
Ⅱ. The case of distinct eigenvalues
Assume that D is stochastically invariant with respect to Eq (1.2). Let $X(0)=x\in D$ and suppose that C has distinct eigenvalues; then (3.3) and (3.4) hold at the point x for all $u\in N^1_D(x)$.
Proof. Let (X, W) denote a weak solution of Eq (1.2) with initial data $X(0)=x$ such that $X(t)\in D$ for all $t\geqslant0$. If x is in the interior of D, then $N^{1,prox}_D(x)=\{0\}$ and (3.3), (3.4) clearly hold. Therefore, from now on, we assume that $x\in\partial D$ and $u\in N^{1,prox}_D(x)$.
Next, we divide the rest of the proof into four steps.
Step 1. There exists a function $\phi\in C^\infty_b(\mathbb{R}^d,\mathbb{R})$ with compact support in N(x) such that $\max_D\phi=\phi(x)=0$ and $D\phi(x)=u^T$ (see [23], Chapter 6.E).
Step 2. Since D is invariant and contains the point x, $\phi(X(t),i)\leqslant\phi(x,i)$ for all $t\geqslant0$. Applying the Itô formula to $\phi(X(t),i)$, we have
$$\int_0^t L\phi(X(s),i)\,ds+\int_0^t D\phi\,g(X(s),i)\,dw(s)+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy)=\int_0^t L\phi(X(s),i)\,ds+\int_0^t D\phi(X(s),i)\,Q\Lambda^{1/2}Q^T(X(s),i)\,dw(s)+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy)\leqslant0. \tag{3.5}$$
Define a Brownian motion $B=\int_0^\cdot Q(X(s),i)^T\,dw(s)$. Recall that Q is orthogonal, and set
$$\bar B=\bar\Lambda(X(s),i)\bar\Lambda(X(s),i)^+B=(B_1,\cdots,B_r,0,\cdots,0)^T,\qquad \bar B^\perp=\big(I_d-\bar\Lambda(X(s),i)\bar\Lambda(X(s),i)^+\big)B=(0,\cdots,0,B_{r+1},\cdots,B_d).$$
Since $Q\bar\Lambda^{1/2}=\bar Q\bar\Lambda^{1/2}$, the left-hand side of inequality (3.5) can be written in the form
$$\int_0^t L\phi(X(s),i)\,ds+\int_0^t D\phi\,\bar g(X(s),i)\,d\bar B_s+\int_0^t D\phi\,Q\bar\Lambda^{1/2}(X(s),i)\,d\bar B^\perp_s+\int_0^t\int_{\mathbb{R}}\big[\phi(X(s),i+\eta(i,y))-\phi(X(s),i)\big]\,\tilde\mu(ds,dy)\leqslant0. \tag{3.6}$$
Let $(\mathcal{F}^{\bar B}_t)_{t\geqslant0}$ be the complete filtration generated by $\bar B$. Combining ([24], Lemma 14.2) with the fact that the martingale $\bar B$ is independent of $\bar B^\perp$ and $\tilde\mu$, we have
$$\int_0^t E^{\mathcal{F}^{\bar B}_s}\big[L\phi(X(s),i)\big]\,ds+\int_0^t E^{\mathcal{F}^{\bar B}_s}\big[D\phi\,\bar g(X(s),i)\big]\,d\bar B_s\leqslant0.$$
Applying the Itô formula to $D\phi\,\bar g(X(s),i)$, we get
$$\int_0^t E^{\mathcal{F}^{\bar B}_s}\big[L\phi(X(s),i)\big]\,ds+\int_0^t\Big\{E^{\mathcal{F}^{\bar B}_s}\big[D\phi\,\bar g(X(0),i)\big]+\int_0^s E^{\mathcal{F}^{\bar B}_s}\big[L(D\phi)\bar g(X(r),i)\big]\,dr+\int_0^s E^{\mathcal{F}^{\bar B}_s}\big[D(D\phi\,\bar g)\bar g(X(r),i)\big]\,dw(r)\Big\}\,d\bar B_s\leqslant0.$$
Step 3. Now we check that Lemma 3.3 applies. First note that all the processes above are bounded, because of Lemma 3.1, (H1) and the fact that $\phi$ has compact support. In addition, given $T>0$, (H3) and the independence of the increments of $\bar B$ imply that $\theta_s=E^{\mathcal{F}^{\bar B}_s}[L\phi(X(s),i)]$, $s\leqslant T$, is continuous at 0 a.s. Similarly, set $\delta_s=E^{\mathcal{F}^{\bar B}_s}\big[D(D\phi\,\bar g)\bar g(X(s),i)\big]$ on [0, T]. Moreover, writing
$$F:=D(D\phi\,\bar g)\bar g(X(s),i)=D^2\phi\,\bar g\bar g(X(s),i)+\big[I_d\otimes D\phi\big]D\bar g\,\bar g(X(s),i),$$
and noting that $D^2\phi(X(s),i)$ and $D\phi(X(s),i)$ are bounded, by Jensen's inequality, Remark 2.2 and (H1) we can derive
$$E\big[|\delta_s-\delta_r|^4\big]\leqslant E\big[|F(X(s),i)-F(X(r),i)|^4\big]\leqslant L'|s-r|^2,$$
for all $s,r\in[0,1]$, where $L'$ is a positive constant. By Kolmogorov's continuity criterion, $\delta$ has $\epsilon$-Hölder sample paths for $0<\epsilon<\frac14$. In particular, $\int_0^t\|\delta_s-\delta_0\|^2\,ds=o(t^{1+\epsilon})$ for $0<\epsilon<\frac12$.
Step 4. In view of Step 3 and Lemma 3.3, we have
$$D\phi\,\bar g(X(0),i)=0 \tag{3.7}$$
and
$$L\phi(X(0),i)-\frac12\mathrm{Tr}\big[D(D\phi\,\bar g)\bar g(X(0),i)\big]\leqslant0. \tag{3.8}$$
From (3.7), we get
$$D\phi\,\bar g(X(0),i)=D\phi\,\bar g(x,i)=u^T\bar Q\bar\Lambda^{1/2}(x,i)=0,\qquad\text{hence}\qquad u^T\bar Q\bar\Lambda^{1/2}\bar\Lambda^{1/2}\bar Q^T(x,i)=u^TC(x,i)=0,$$
that is, equality (3.7) implies
$$C(x,i)u=0,$$
owing to the symmetry of C(x,i). In view of inequality (3.8), $D\phi(x,i)=u^T$ and Definition 7.6, we have
$$L\phi(X(0),i)-\frac12\mathrm{Tr}\big[D^2\phi(X(0),i)\bar g\bar g(X(0),i)+(I_d\otimes D\phi)D\bar g\,\bar g(X(0),i)\big]=D\phi\,f(x,i)-\frac12\mathrm{Tr}\big[(I_d\otimes u^T)D\bar g\,\bar g(x,i)\big]+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0,$$
which is equivalent to (3.4) by equality (3.1) and Lemma 3.2.
Ⅲ. The case of repeated eigenvalues
Proposition 3.1. Assume that D is stochastically invariant with respect to Eq (1.2) and that C has repeated eigenvalues. Then conditions (3.3) and (3.4) hold for all $x=X(0)\in D$, $u\in N^1_D(x)$ and $i\in\mathbb{M}$.
Proof. Since C has repeated eigenvalues, say $\lambda_1(x,i)\geqslant\cdots\geqslant\lambda_d(x,i)$, we can perform a change of variable so that the $\lambda_i(x,i)$ satisfy the conditions of Case Ⅰ or Ⅱ. First, set
$$A_\varepsilon=Q(x,i)\,\mathrm{diag}\big[\sqrt{1-\varepsilon},\sqrt{(1-\varepsilon)^2},\cdots,\sqrt{(1-\varepsilon)^d}\big]\,Q^T(x,i),$$
for $0<\varepsilon<1$. Since D is invariant with respect to X, $D_\varepsilon=A_\varepsilon D$ is invariant with respect to $X_\varepsilon:=A_\varepsilon X$.
Note that
$$dX_\varepsilon=f_\varepsilon(X_\varepsilon,i)\,dt+C_\varepsilon(X_\varepsilon,i)^{1/2}\,dw(t), \tag{3.9}$$
where $f_\varepsilon=A_\varepsilon f((A_\varepsilon)^{-1}\cdot)$ and $C_\varepsilon:=A_\varepsilon C((A_\varepsilon)^{-1}\cdot)(A_\varepsilon)^T$ have the same regularity and growth as f and C. On the one hand, if C has no zero eigenvalue, the positive eigenvalues of $C_\varepsilon$ are all distinct at $x_\varepsilon=A_\varepsilon x$, since $C_\varepsilon(x_\varepsilon,i)=Q(x,i)\,\mathrm{diag}\big[(1-\varepsilon)\lambda_1(x,i),\cdots,(1-\varepsilon)^d\lambda_d(x,i)\big]\,Q(x,i)^T$. Therefore, we can apply Case Ⅰ to $((X_\varepsilon,i),D_\varepsilon)$ and obtain
$$\begin{cases}C_\varepsilon(x_\varepsilon,i)u_\varepsilon=0,\\ \Big\langle u_\varepsilon,f_\varepsilon(x_\varepsilon,i)-\frac12\sum_{j=1}^d DC^j_\varepsilon(x_\varepsilon,i)(C_\varepsilon C^+_\varepsilon)_j(x_\varepsilon,i)\Big\rangle+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0.\end{cases} \tag{3.10}$$
On the other hand, if C has zero eigenvalues, the positive eigenvalues of $C_\varepsilon$ are still all distinct at $x_\varepsilon=A_\varepsilon x$; therefore, we can apply Case Ⅱ to $((X_\varepsilon,i),D_\varepsilon)$ and again obtain (3.10). By Definition 2.3 and continuity in $\varepsilon$, letting $\varepsilon\to0$ in (3.10), we derive
$$\begin{cases}C(x,i)u=0,\\ \Big\langle u,f(x,i)-\frac12\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0,\end{cases}$$
for all $u\in N^1_D(x)$, $i\in\mathbb{M}$.
In this subsection, we prove that the necessary conditions of Theorem 3.1 are also sufficient.
We will show that (3.3) and (3.4) imply that the generator L of X satisfies the positive maximum principle (see [25], p. 165): if $\phi\in C^2(\mathbb{R}^d,\mathbb{R})$, $x\in D$, and $\max_D\phi=\phi(x)\geqslant0$, then $L\phi(x)\leqslant0$.
Proposition 3.2. Assume that (3.3) and (3.4) hold for all $X(0)=x\in D$ and $u\in N^1_D(x)$. Then the generator L satisfies the positive maximum principle.
Proof. Similarly to the proof of Proposition 4.1 in [11], one has
$$\mathrm{Tr}\big(D^2\phi\,C(x,i)\big)\leqslant-\Big\langle D\phi(x,i)^T,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle,$$
for any smooth function $\phi$ such that $\max_D\phi=\phi(x,i)\geqslant0$.
Utilizing (3.4), we have
$$L\phi(x,i)=D\phi\,f(x,i)+\frac12\mathrm{Tr}\big(D^2\phi\,gg^T(x,i)\big)+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant D\phi\,f(x,i)-\frac12\Big\langle D\phi(x,i)^T,\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle+\sum_{j=1}^N\gamma_{ij}\phi(x,j)=\Big\langle D\phi(x,i),f(x,i)-\frac12\sum_{j=1}^d DC_j(x,i)(CC^+)_j(x,i)\Big\rangle+\sum_{j=1}^N\gamma_{ij}\phi(x,j)\leqslant0.$$
Remark 3.1. A linear operator L satisfying the positive maximum principle is dissipative; therefore, the theory of dissipative operators can be used to prove Theorem 3.1.
Proposition 3.3. Under the assumptions of Theorem 3.1, assume that conditions (3.3) and (3.4) hold for all $x\in D$ and $u\in N^1_D(x)$. Then D is stochastically invariant with respect to Eq (1.2).
Proof. We know that L satisfies the positive maximum principle and that the trajectory field of Eq (1.2) has a version whose sample functions are almost all compact (see Remark 2.3); hence there exists a compact subset of $D_E[0,\infty)$, where (E, r) denotes a metric space (see [25], p. 122). Then ([25], Theorem 4.5.4) yields the existence of a solution to the martingale problem associated with L with sample paths in the space of càdlàg functions with values in $D_\Delta=D\cup\{\Delta\}$, the one-point compactification of D. Recalling Remark 2.2 and ([25], Proposition 5.3.5), we get that the solution has a modification with continuous sample paths in D. Finally, ([25], Theorem 5.3.3) implies the existence of a weak solution (X, W) such that $X(t)\in D$ almost surely for all $t>0$.
Remark 3.2. Theorem 3.1 does not hold if r(t) is dependent on w(t), since the Itô formula used above fails in that case.
Remark 3.3. When $r(t)\equiv1$, Theorem 3.1 is equivalent to Theorem 2.3 in [11]. When $r(t)\equiv1$, $\tau=0$ and $G(X_t)=0$, Theorem 3.1 is equivalent to Theorem 3.1 in [26].
Equation (1.2) can be regarded as a stochastically perturbed version of the deterministic hybrid differential equation
$$\frac{dX(t)}{dt}=f(X(t),i). \tag{4.1}$$
We know that Eq (4.1) is asymptotically stable under conditions (H1)-(H2). A natural question then arises: if the system (1.2) is asymptotically stable, how much stochastic perturbation can it tolerate without losing asymptotic stability? Such questions are known as problems of robust stability, which have received a great deal of attention, for example [27,28,29,30]. Robustness of $\psi$-type stability requires that the solution of Eq (1.2) be almost surely $\psi$-type stable: $\limsup_{t\to\infty}\frac{\ln|x(t)|}{\ln\psi(t)}<0$. Obviously, $\psi$-type stability implies almost sure exponential stability when $\psi(t)=e^{\alpha t}$ for any $\alpha>0$. Almost sure exponential stability for Eq (1.2) requires that $E\big[2X(t)^Tf(X(t),i)+|g(X(t),i)|^2\big]\leqslant-\lambda E|X(t)|^2$ for some $\lambda>0$, which implies that
$$L\phi(X(t),i)=2X(t)^Tf(X(t),i)+|g(X(t),i)|^2\leqslant0$$
when $\phi(x)=x^2$. Therefore, the generator L satisfies the positive maximum principle, and there exists a weak solution X(t) of Eq (1.2) such that $X(t)\in D$, where D is stochastically invariant with respect to Eq (1.2). So robustness implies stochastic invariance. The converse is not true; see Eq (5.1) below.
In this section, we give an example that is stochastically invariant but not robustly stable. Moreover, we give the numerical solution of the example and its most probable phase portrait to illustrate the contours of the solution paths. We restrict to a one-dimensional setting for ease of computation and notation.
Example 5.1. Let w(t) be a scalar Brownian motion. Let r(t) be a right-continuous Markov chain taking values in $\mathbb{M}=\{1,2\}$ with generator
$$\Gamma=\begin{pmatrix}-1 & 1\\ 3 & -3\end{pmatrix}.$$
Assume that w(t) and r(t) are independent. Consider a one-dimensional linear stochastic differential equation with Markovian switching of the form
$$dX(t)=\alpha(r(t))X(t)\,dt+\sigma(r(t))X(t)\,dw(t),\qquad t\geqslant0, \tag{5.1}$$
where $\alpha(i)$ and $\sigma(i)$ ($i=1,2$) denote the values of the coefficients in mode i. Let us put
$$\alpha(1)=1,\quad\alpha(2)=2,\quad\sigma(1)=2,\quad\sigma(2)=8.$$
We can regard Eq (5.1) as the result of the following two equations,
$$dX(t)=X(t)\,dt+2X(t)\,dw(t) \tag{5.2}$$
and
$$dX(t)=2X(t)\,dt+8X(t)\,dw(t), \tag{5.3}$$
switching to each other according to the movement of the Markov chain r(t), with initial data $x=X(0)$. Then, with the previous notation and $D\subset\mathbb{R}$, define $\phi:\mathbb{R}\times\mathbb{M}\to\mathbb{R}_+$ by
$$\phi(x,i)=\beta_i x^{1/4}$$
with $\beta_1=1$ and $\beta_2=\frac12$. Therefore, the first-order normal cone given by Definition 2.4 reads
$$N^1_D(x)=\Big\{\tfrac14\beta_i z^{-3/4}\in\mathbb{R}:\big\langle\tfrac14\beta_i z^{-3/4},y-x\big\rangle\leqslant o(\|y-x\|),\ \forall y\in D\Big\}.$$
Then the set $D\subset\mathbb{R}$ is stochastically invariant with respect to Eq (5.1) if and only if
$$\begin{cases}\frac14\beta_i\sigma(i)x^{1/4}=0,\\ \Big[\big\langle\frac14\beta_i,\alpha(i)-\frac12\sigma^2(i)\big\rangle+(\gamma_{i1}\beta_1+\gamma_{i2}\beta_2)\Big]x^{1/4}\leqslant0.\end{cases} \tag{5.4}$$
Proof.
Since
$$|\alpha(i)X(t)|^2\vee|\sigma(i)X(t)|^2\leqslant64(1+|X(t)|^2),$$
condition (H1) is satisfied. From (5.4) we can derive
$$x=0. \tag{5.5}$$
Choosing
$$\max_D\phi(X(t),i)=\phi(x,i)=0,$$
we have
$$L\phi(x,1)=D\phi(x,1)f(x,1)+\frac12\mathrm{Tr}\big(D^2\phi\,gg^T(x,1)\big)+\sum_{j=1}^2\gamma_{1j}\phi(x,j)=\Big[\frac14\beta_1\alpha(1)+\frac12\times\frac14\times\Big(\frac14-1\Big)\beta_1\sigma^2(1)+(\gamma_{11}\beta_1+\gamma_{12}\beta_2)\Big]x^{1/4}=-\frac58x^{1/4}<0$$
and
$$L\phi(x,2)=D\phi(x,2)f(x,2)+\frac12\mathrm{Tr}\big(D^2\phi\,gg^T(x,2)\big)+\sum_{j=1}^2\gamma_{2j}\phi(x,j)=\Big[\frac14\beta_2\alpha(2)+\frac12\times\frac14\times\Big(\frac14-1\Big)\beta_2\sigma^2(2)+(\gamma_{21}\beta_1+\gamma_{22}\beta_2)\Big]x^{1/4}=-\frac54x^{1/4}<0.$$
Hence, the generator L satisfies the positive maximum principle, and by ([25], Theorem 5.3.3) there exists a weak solution (X, W) such that $X(t)\in D$ almost surely for all $t>0$.
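As a quick sanity check (our own addition, not part of the proof), the two bracketed coefficients above can be evaluated numerically; the script below simply plugs in $\beta_1=1$, $\beta_2=\frac12$, the chosen $\alpha(i)$, $\sigma(i)$ and the generator $\Gamma$, and confirms the values $-\frac58$ and $-\frac54$.

```python
from fractions import Fraction as F

beta  = {1: F(1), 2: F(1, 2)}
alpha = {1: F(1), 2: F(2)}
sigma = {1: F(2), 2: F(8)}
gamma = {1: {1: F(-1), 2: F(1)},     # generator row for mode 1
         2: {1: F(3),  2: F(-3)}}    # generator row for mode 2

def L_phi_coefficient(i):
    """Coefficient of x^(1/4) in L phi(x, i) for phi(x, i) = beta_i * x^(1/4)."""
    drift     = F(1, 4) * beta[i] * alpha[i]                               # D(phi) * f
    diffusion = F(1, 2) * F(1, 4) * (F(1, 4) - 1) * beta[i] * sigma[i]**2  # (1/2) Tr(D^2(phi) g g^T)
    switching = gamma[i][1] * beta[1] + gamma[i][2] * beta[2]              # sum_j gamma_ij beta_j
    return drift + diffusion + switching

print(L_phi_coefficient(1))  # -5/8
print(L_phi_coefficient(2))  # -5/4
```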
Assume that $X(0)=x$ and $\phi(x,i)=\beta_ix^{1/4}$. Since the set D is stochastically invariant with respect to (5.1), we have
$$\max_D\phi=\phi(X(0),i),$$
$$\phi(X(t),i)<\phi(X(0),i)\qquad\text{for } t>0. \tag{5.6}$$
Applying the Itô formula twice to (5.6), we have
$$\int_0^t E^{\mathcal{F}^B}\big[L\phi(X(s),i)\big]\,ds+\int_0^t\Big\{E^{\mathcal{F}^B}\big[D\phi\,g(X(0),i)\big]+\int_0^s E^{\mathcal{F}^B}\big[L(D\phi)g(X(r),i)\big]\,dr+\int_0^s E^{\mathcal{F}^B}\big[D(D\phi\,g)g(X(r),i)\big]\,dw(r)\Big\}\,dw(s)\leqslant0,$$
where
$$\begin{cases}E^{\mathcal{F}^B}\big[L\phi(X(0),i)\big]=\Big[\frac14\beta_i\alpha(i)-\frac3{32}\beta_i\sigma(i)^2+\big(\gamma_{i1}+\frac12\gamma_{i2}\big)\Big]x^{1/4},\\[1mm]
E^{\mathcal{F}^B}\big[D\phi\,g(X(0),i)\big]=\frac14\beta_i\sigma(i)x^{1/4},\\[1mm]
E^{\mathcal{F}^B}\big[L(D\phi)g(X(0),i)\big]=\Big[-\frac3{16}\beta_i\alpha(i)\sigma(i)+\frac9{128}\beta_i\sigma(i)^3+\frac14\sigma(i)\big(\gamma_{i1}+\frac12\gamma_{i2}\big)\Big]x^{1/4},\\[1mm]
E^{\mathcal{F}^B}\big[D(D\phi\,g)g(X(0),i)\big]=\Big[-\frac3{16}\beta_i\sigma(i)^2+2\big(I_d\otimes\tfrac14\beta_i\big)\sigma(i)^2\Big]x^{1/4}.\end{cases}$$
Combining with Lemma 3.3, we can derive
$$\begin{cases}\frac14\beta_i\sigma(i)x^{1/4}=0,\\ \Big[\big\langle\frac14\beta_i,\alpha(i)-\sigma^2(i)\big\rangle+\gamma_{i1}\beta_1+\gamma_{i2}\beta_2\Big]x^{1/4}\leqslant0.\end{cases}$$
Since
$$|\alpha(i)X(t)|^2\vee|\sigma(i)X(t)|^2\leqslant64(1+|X(t)|^2), \tag{5.7}$$
the almost sure asymptotic estimate ([8], Theorem 3.26) shows that, for any given initial data X(0), there exists a solution $X(t;\xi)$ to Eq (5.1) with the property
$$\limsup_{t\to\infty}\frac1t\ln(|X(t)|)\leqslant\sqrt{64}+\frac{\sqrt{64}}{2}\qquad a.s.$$
However, we cannot derive
$$\limsup_{t\to\infty}\frac{\ln(|X(t)|)}{t}<0\qquad a.s.$$
Hence, the solution of (5.1) is not robustly stable (see [28]).
Remark 5.1. For the system (5.1), robustness is a sufficient condition for stochastic invariance, not a necessary condition.
Remark 5.2. Robustness theory tells how much perturbation an equation can tolerate, whereas stochastic invariance theory describes the region covered by all weak solutions of the equation; there is a trade-off between the two. In addition, stochastic invariance theory still identifies specific admissible initial values, which robustness theory does not provide.
In order to further visualize the solution trajectories of the equation, we give the numerical solution and the most probable phase portrait of (5.1). We apply the Euler-Maruyama (EM) method to the linear HSDE (5.1) on [0, 10] and generate $10^4$ discrete paths of X(t), as Figure 1 shows; a minimal sketch of such a simulation is given below.
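The following is a minimal sketch (our own illustration; the step size, path count and starting mode are arbitrary choices, not the exact settings behind Figure 1) of the EM scheme for the hybrid SDE (5.1): on each time step the chain state is advanced with transition probabilities $I+\Gamma\,dt$ and the diffusion is advanced with the EM update.

```python
import numpy as np

rng = np.random.default_rng(1)

Gamma = np.array([[-1.0, 1.0],
                  [ 3.0, -3.0]])
alpha = np.array([1.0, 2.0])   # alpha(1), alpha(2)
sigma = np.array([2.0, 8.0])   # sigma(1), sigma(2)

T, n_steps, n_paths, x0 = 10.0, 10_000, 100, 1.0
dt = T / n_steps
P = np.eye(2) + Gamma * dt     # one-step transition matrix (valid for small dt)

X = np.full(n_paths, x0)
r = np.zeros(n_paths, dtype=int)                         # start every path in mode 1 (index 0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + alpha[r] * X * dt + sigma[r] * X * dW        # EM update of Eq (5.1)
    jump = rng.random(n_paths) < P[r, 1 - r]             # per-path probability of switching mode
    r = np.where(jump, 1 - r, r)                         # Markovian switching

print("mean |X(T)| over paths:", np.abs(X).mean())
```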
In Figure 1, the red line denotes the Markovian switching and the green lines denote sample paths of Eq (5.1); we see that all solutions become stable for t>2. From Eqs (5.2) and (5.3), the respective maximizers at time t are
$$X^1_m(t)=x_0\exp(-5t),\qquad X^2_m(t)=x_0\exp(-94t)$$
for every $x_0\in\mathbb{R}$. Then the most probable dynamical system is
$$\dot x^1_m=-5x^1_m,\qquad \dot x^2_m=-94x^2_m,$$
which is of the same form as the corresponding deterministic dynamical system $\dot x=\alpha x$, $\alpha<0$. The most probable phase portraits [31] provide geometric pictures of the most probable (maximal likelihood) orbits of stochastic dynamical systems. The most probable phase portrait for Eq (5.1) is displayed in Figure 2, where the red line denotes solutions of (5.2) and the green line denotes solutions of (5.3). Analogously to the setting of deterministic differential equations, all solutions tend to the origin; in this case the equilibrium point is called a sink.
In this paper, the stochastic invariance theory for HSDEs has been studied. The necessary and sufficient conditions of Theorem 3.1 have been established. The obtained result improves and generalizes the result in [11]. Moreover, an example is given to illustrate our theorem.
For convenience, we collect some definitions and properties of matrix tools intensively used in the proofs throughout the article in this Appendix.
Definition 7.1. Fix $A\in\mathbb{M}_{m,n}$. The Moore-Penrose pseudoinverse of A is the unique $n\times m$ matrix $A^+$ satisfying: $AA^+A=A$, $A^+AA^+=A^+$, and $AA^+$ and $A^+A$ are Hermitian.
Definition 7.2. (spectral decomposition of symmetric matrices) A matrix $A\in\mathbb{M}^d$ is real symmetric if and only if there exist a real orthogonal matrix $Q\in\mathbb{M}^d$ and a real diagonal matrix $\Lambda=\mathrm{diag}[(\lambda_i)_{i\leqslant d}]\in\mathbb{M}^d$ such that $A=Q\Lambda Q^T$.
Proposition 7.1. If $A\in\mathbb{M}^d$ has the spectral decomposition $Q\Lambda Q^T$ for some orthogonal matrix $Q\in\mathbb{M}^d$ and a diagonal matrix $\Lambda=\mathrm{diag}[(\lambda_i)_{i\leqslant d}]\in\mathbb{M}^d$, then $A^+=Q\Lambda^+Q^T$, in which $\Lambda^+=\mathrm{diag}[(\lambda_i^{-1}\mathbb{1}_{\lambda_i\neq0})_{i\leqslant d}]$, and $AA^+=Q\,\mathrm{diag}[(\mathbb{1}_{\lambda_i\neq0})_{i\leqslant d}]\,Q^T$. Moreover, if A is positive semi-definite and $B=A^{1/2}$, then $B^+=Q(\Lambda^+)^{1/2}Q^T$.
Definition 7.3. Let $A=(a_{ij})\in\mathbb{M}_{m_1,n_1}$ and $B\in\mathbb{M}_{m_2,n_2}$. The Kronecker product $A\otimes B$ is defined as the $m_1m_2\times n_1n_2$ matrix
$$A\otimes B=\begin{pmatrix}a_{11}B & \cdots & a_{1n_1}B\\ \vdots & \ddots & \vdots\\ a_{m_11}B & \cdots & a_{m_1n_1}B\end{pmatrix}.$$
Definition 7.4. (see [32], Chapter 9) Let A and B be as in Definition 7.3, $C\in\mathbb{M}_{n_1,n_3}$ and $D\in\mathbb{M}_{n_2,n_4}$. Then
$$(A\otimes B)(C\otimes D)=AC\otimes BD,\qquad A\otimes B=A(I_{n_1}\otimes B)\ \text{if } m_2=1,\qquad A\otimes B=B(A\otimes I_{n_2})\ \text{if } m_1=1.$$
Definition 7.5. (Jacobian matrix) Let F be a differentiable map from $\mathbb{M}_{n,q}$ to $\mathbb{M}_{m,p}$. The Jacobian matrix DF(X) of F at X is defined as the following $mp\times nq$ matrix:
$$DF(X)=\frac{\partial\,\mathrm{vec}(F(X))}{\partial\,\mathrm{vec}(X)^T}. \tag{7.1}$$
Definition 7.6. Let G be a differentiable map from $\mathbb{M}_{n,q}$ to $\mathbb{M}_{m,p}$ and H be a differentiable map from $\mathbb{M}_{n,q}$ to $\mathbb{M}_{p,l}$. Then $D(GH)=(H^T\otimes I_m)DG+(I_l\otimes G)DH$.
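These matrix facts are easy to sanity-check numerically. Below is a small NumPy sketch (our own illustration; the random test matrices and tolerances are arbitrary) verifying the four Moore-Penrose identities of Definition 7.1, the spectral formula for $A^+$ from Proposition 7.1, and the mixed-product rule of Definition 7.4.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-deficient symmetric positive semi-definite A, and its spectral pseudoinverse
M = rng.standard_normal((4, 2))
A = M @ M.T
lam, Q = np.linalg.eigh(A)
lam_plus = np.array([1.0 / l if abs(l) > 1e-10 else 0.0 for l in lam])
A_plus = Q @ np.diag(lam_plus) @ Q.T                 # A^+ = Q Lambda^+ Q^T

assert np.allclose(A @ A_plus @ A, A)                # A A^+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)      # A^+ A A^+ = A^+
assert np.allclose(A @ A_plus, (A @ A_plus).T)       # A A^+ is symmetric
assert np.allclose(A_plus @ A, (A_plus @ A).T)       # A^+ A is symmetric
assert np.allclose(A_plus, np.linalg.pinv(A))        # agrees with NumPy's pinv

# Mixed-product rule: (A1 x B1)(C1 x D1) = (A1 C1) x (B1 D1)
A1, B1 = rng.standard_normal((2, 3)), rng.standard_normal((4, 5))
C1, D1 = rng.standard_normal((3, 2)), rng.standard_normal((5, 4))
lhs = np.kron(A1, B1) @ np.kron(C1, D1)
rhs = np.kron(A1 @ C1, B1 @ D1)
assert np.allclose(lhs, rhs)
print("all matrix identities verified")
```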
The authors gratefully acknowledge the financial support of the Young Innovative Talents Project of General Colleges and Universities in Guangdong Province under Grant number 2018KQNCX238, as well as the Doctoral Scientific Research Foundation of Jiaying University.
The authors declare that they have no conflict of interest regarding this work.
[1] X. R. Mao, Stability of stochastic differential equations with Markovian switching, Stoch. Proc. Appl., 79 (1999), 45-67. doi: 10.1016/S0304-4149(98)00070-2
[2] Y. Xu, Z. M. He, P. G. Wang, pth moment asymptotic stability for neutral stochastic functional differential equations with Lévy processes, Appl. Math. Comput., 269 (2015), 594-605.
[3] F. Chen, M. X. Shen, W. Y. Fei, et al. Stability of highly nonlinear hybrid stochastic integrodifferential delay equations, Nonlinear Anal. Hybrid Syst., 31 (2019), 180-199. doi: 10.1016/j.nahs.2018.09.001
[4] J. W. Luo, K. Liu, Stability of infinite dimensional stochastic evolution equations with memory and Markovian jumps, Stoch. Proc. Appl., 118 (2008), 864-895. doi: 10.1016/j.spa.2007.06.009
[5] A. V. Skorokhod, Asymptotic methods in the theory of stochastic differential equations, Providence: American Mathematical Society, 1989.
[6] H. J. Wu, J. T. Sun, p-Moment stability of stochastic differential equations with impulsive jump and Markovian switching, Automatica, 42 (2006), 1753-1759. doi: 10.1016/j.automatica.2006.05.009
[7] E. W. Zhu, X. Tian, Y. H. Wang, On pth moment exponential stability of stochastic differential equations with Markovian switching and time-varying delay, J. Inequal. Appl., 1 (2015), 1-11.
[8] X. R. Mao, C. G. Yuan, Stochastic differential equations with Markovian switching, London: Imperial College Press, 2006.
[9] N. T. Dieu, Some results on almost sure stability of non-autonomous stochastic differential equations with Markovian switching, Vietnam J. Math., 44 (2016), 1-13. doi: 10.1007/s10013-016-0187-x
[10] L. G. Xu, Z. L. Dai, H. X. Hu, Almost sure and moment asymptotic boundedness of stochastic delay differential systems, Appl. Math. Comput., 361 (2019), 157-168. doi: 10.1016/j.cam.2019.04.001
[11] A. E. Jaber, B. Bouchard, C. Illand, Stochastic invariance of closed sets with non-Lipschitz coefficients, Stoch. Proc. Appl., 129 (2019), 1726-1748. doi: 10.1016/j.spa.2018.06.003
[12] D. Cao, C. Y. Sun, M. Yang, Dynamics for a stochastic reaction-diffusion equation with additive noise, J. Differ. Equations, 259 (2015), 838-872. doi: 10.1016/j.jde.2015.02.020
[13] D. Li, C. Y. Sun, Q. Q. Chang, Global attractor for degenerate damped hyperbolic equations, J. Math. Anal. Appl., 453 (2017), 1-19. doi: 10.1016/j.jmaa.2017.03.077
[14] A. Friedman, Stochastic differential equations and applications, New York: Academic Press, 1975.
[15] J. P. Aubin, G. D. Prato, Stochastic viability and invariance, Ann. Scuola Norm-Sci., 17 (1990), 595-613.
[16] S. Tappe, Invariance of closed convex cones for stochastic partial differential equations, J. Math. Anal. Appl., 451 (2017), 1077-1122. doi: 10.1016/j.jmaa.2017.02.044
[17] I. Chueshov, M. Scheutzow, Invariance and monotonicity for stochastic delay differential equations, Discrete Cont. Dyn-B, 18 (2013), 1533-1554.
[18] B. Øksendal, Stochastic differential equations: An introduction with applications, 6 Eds., Beijing: World Publishing Corporation, 2003.
[19] D. H. He, L. G. Xu, Boundedness analysis of stochastic integrodifferential systems with Lévy noise, J. Taibah Univ. Sci., 14 (2020), 87-93. doi: 10.1080/16583655.2019.1708540
[20] S. E. A. Mohammed, Stochastic functional differential equations, Boston: Pitman Advanced Publishing Program, 1984.
[21] R. Buckdahn, M. Quincampoix, C. Rainer, Another proof for the equivalence between invariance of closed sets with respect to stochastic and deterministic systems, B. Sci. Math., 134 (2010), 207-214. doi: 10.1016/j.bulsci.2007.11.003
[22] B. P. Cheridito, H. M. Soner, N. Touzi, Small time path behavior of double stochastic integrals and applications to stochastic control, Ann. Appl. Probab., 15 (2005), 2472-2495. doi: 10.1214/105051605000000557
[23] R. T. Rockafellar, J. B. Wets, Variational analysis, New York: Springer, 1998.
[24] T. G. Kurtz, Lectures on stochastic analysis, 2 Eds., Madison: University of Wisconsin-Madison, 2007.
[25] S. N. Ethier, T. G. Kurtz, Markov processes: Characterization and convergence, New Jersey: John Wiley and Sons, 1986.
[26] C. H. Li, J. W. Luo, Stochastic invariance for neutral functional differential equation with non-Lipschitz coefficients, Discrete Cont. Dyn-B, 24 (2019), 3299-3318.
[27] X. R. Mao, Stochastic differential equations and applications, 2 Eds., Chichester: Woodhead Publishing, 2007.
[28] F. K. Wu, S. G. Hu, C. M. Huang, Robustness of general decay stability of nonlinear neutral stochastic functional differential equations with infinite delay, Syst. Control. Lett., 59 (2010), 195-202. doi: 10.1016/j.sysconle.2010.01.004
[29] C. G. Yuan, J. Lygeros, Stochastic Markovian switching hybrid processes, Cambridge: University of Cambridge, 2004.
[30] L. G. Xu, S. S. Ge, H. X. Hu, Boundedness and stability analysis for impulsive stochastic differential equations driven by G-Brownian motion, Int. J. Control, 92 (2017), 1-16.
[31] B. Yang, Z. Zeng, L. Wang, Most probable phase portraits of stochastic differential equations and its numerical simulation, arXiv, 2017. Available from: https://arxiv.org/abs/1703.06789.
[32] J. R. Magnus, H. Neudecker, Matrix differential calculus with applications in statistics and econometrics, 3 Eds., New Jersey: Wiley, 2007.