This paper is devoted to controlled drift estimation of the mixed fractional Ornstein-Uhlenbeck process. We consider two models: one with the optimal input, where we find the control function that maximizes the Fisher information of the unknown parameter, and one with a constant control function. Large-sample asymptotic properties of the Maximum Likelihood Estimator (MLE) are deduced using Laplace transform computations or the Cameron-Martin formula.
Citation: Chunhao Cai, Min Zhang. A note on inference for the mixed fractional Ornstein-Uhlenbeck process with drift[J]. AIMS Mathematics, 2021, 6(6): 6439-6453. doi: 10.3934/math.2021378
Drift parameter estimation for the Ornstein-Uhlenbeck process has received increasing attention in the past decades. In recent years, researchers have considered not only processes driven by standard Brownian motion or Lévy processes but also the fractional case (see, e.g., [15,17]). The MLE and its large deviations in the mixed fractional case have been studied by Chigansky, Kleptsyna and Marushkevych [12,19]. In this paper we again consider the MLE of the drift parameter, but with an extra input chosen in a suitable space so as to maximize the Fisher information; this procedure is called experiment design.
Let us define $X=(X_t, 0\le t\le T)$, a real-valued process representing the observation, which is governed by
$$dX_t=-\vartheta X_t\,dt+u(t)\,dt+d\xi_t,\qquad t\in[0,T],\quad X_0=0, \tag{1.1}$$
where $\xi=(\xi_t, 0\le t\le T)$ is a mixed fractional Brownian motion (mfBm for short) defined by $\xi_t=W_t+B^H_t$; here $W=(W_t, 0\le t\le T)$ and $B^H=(B^H_t, 0\le t\le T)$ are independent standard Brownian motion and fractional Brownian motion with Hurst parameter $H\in(0,1)$, $H\ne 1/2$.
In the statistical literature, the classical approach to experiment design consists of a two-step procedure: maximize the Fisher information under an energy constraint on the input, and find an adaptive estimation procedure. Ovseevich et al. [20] first considered this type of problem for the diffusion equation with continuous observation. Since the kernel of [20] has no explicit formula in the fractional diffusion case, Brouste et al. [3,4] derived lower and upper bounds with a spectral gap method and solved the same problem. Based on this method, Brouste and Cai [1] extended the result to the partially observed fractional Ornstein-Uhlenbeck process; in that work, asymptotic normality was demonstrated using linear filtering of Gaussian processes and the Laplace transforms presented in [2,14,15,16,18]. The common point of these previous works is that the optimal input does not depend on the unknown parameter, and the maximum likelihood estimator can be found directly from the likelihood equation. The one-step estimator, following the Newton-Raphson method, was introduced in this context by Cai and Lv [8].
For a fixed value of the parameter $\vartheta$, let $\mathbf{P}^T_\vartheta$ denote the probability measure induced by $X^T$ on the function space $C[0,T]$, and let $\mathcal{F}^X_t$ be the natural filtration of $X$, $\mathcal{F}^X_t=\sigma(X_s, 0\le s\le t)$. Let $L(\vartheta,X^T)$ be the likelihood, i.e., the Radon-Nikodym derivative of $\mathbf{P}^T_\vartheta$, restricted to $\mathcal{F}^X_T$, with respect to some reference measure on $C[0,T]$. In this setting, the Fisher information stands for
$$I_T(\vartheta,u)=-\mathbf{E}_\vartheta\frac{\partial^2}{\partial\vartheta^2}\ln L(\vartheta,X^T).$$
Let $\mathcal{U}_T$ denote a functional space of controls, defined by Eqs (2.6) and (2.7). Let us therefore set
$$J_T(\vartheta)=\sup_{u\in\mathcal{U}_T} I_T(\vartheta,u). \tag{1.2}$$
Our main goal is to find an estimator $\bar\vartheta_T$ of the parameter $\vartheta$ which is asymptotically efficient in the sense that, for any compact $K\subset\mathbb{R}^+_*=\{\vartheta\in\mathbb{R},\ \vartheta>0\}$,
$$\sup_{\vartheta\in K} J_T(\vartheta)\,\mathbf{E}_\vartheta(\bar\vartheta_T-\vartheta)^2=1+o(1), \tag{1.3}$$
as T→∞.
As the optimal input does not depend on $\vartheta$ (see Theorem 2.1), a possible candidate is the Maximum Likelihood Estimator (MLE) $\hat\vartheta_T$, defined as the maximizer of the likelihood:
$$\hat\vartheta_T=\arg\max_{\vartheta>0} L(\vartheta,X^T).$$
We want to establish the asymptotic normality of the MLE of $\vartheta$.
Interest in mixed fractional Brownian motion was triggered by Cheridito [9]. The recent works of Cai, Chigansky, Kleptsyna and Marushkevych ([6,11,12,19]) are of great value for the purposes of this paper. The process $\xi$ satisfies a number of curious properties with applications in mathematical finance; see [5]. In particular, as shown in [9,10], it is a semimartingale if and only if $H\in\{\frac12\}\cup(\frac34,1]$, and the measure $\mu^\xi$ induced by $\xi$ on the space of continuous functions on $[0,T]$ is equivalent to the standard Wiener measure $\mu^B$ for $H>\frac34$. On the other hand, $\mu^\xi$ and $\mu^{B^H}$ are equivalent if and only if $H<\frac14$.
The paper falls into five parts. In Section 2 we present the main results of this paper, and Section 3 is devoted to their proofs. Section 4 treats the special case of a constant input. Some lemmas are given in the Appendix.
Even though the mixed fractional Brownian motion $\xi$ is a semimartingale when $H>\frac34$, it is hard to write the likelihood function directly. We will transform our model with the fundamental martingale of [6] and obtain an explicit representation of the likelihood function. In what follows, all random variables and processes are defined on a given stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbf{P})$ satisfying the usual conditions, and all processes are $(\mathcal{F}_t)$-adapted. Moreover, the natural filtration of a process is understood as the $\mathbf{P}$-completion of the filtration generated by this process.
Following the canonical innovation representation in [6], the fundamental martingale is defined as $M_t=\mathbf{E}(W_t\mid\mathcal{F}^\xi_t)$, $t\in[0,T]$; for $H\in(0,1)$, $H\ne1/2$, this martingale satisfies
$$M_t=\int_0^t g(s,t)\,d\xi_s,\qquad \langle M\rangle_t=\int_0^t g(s,t)\,ds, \tag{2.1}$$
where $g(s,t)$ is the solution of the integro-differential equation
$$g(s,t)+H\frac{d}{ds}\int_0^t g(r,t)\,|r-s|^{2H-1}\,\mathrm{sign}(s-r)\,dr=1,\qquad 0<s\le t\le T. \tag{2.2}$$
Following [6], let us introduce the process $Z=(Z_t, 0\le t\le T)$, the fundamental semimartingale associated with $X$, defined as
$$Z_t=\int_0^t g(s,t)\,dX_s.$$
Note that $X$ can be represented as $X_t=\int_0^t \hat g(s,t)\,dZ_s$, where
$$\hat g(s,t)=1-\frac{d}{d\langle M\rangle_s}\int_0^t g(r,s)\,dr \tag{2.3}$$
for $0\le s\le t$, and therefore the natural filtrations of $X$ and $Z$ coincide. Moreover, we have the following representation:
$$dZ_t=-\vartheta Q_t\,d\langle M\rangle_t+v(t)\,d\langle M\rangle_t+dM_t, \tag{2.4}$$
where
$$Q_t=\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)X_s\,ds,\qquad v(t)=\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)u(s)\,ds. \tag{2.5}$$
First of all, let us define the space of controls for $v$:
$$V_T=\Big\{v\ \Big|\ \frac1T\int_0^T|v(t)|^2\,d\langle M\rangle_t\le1\Big\}. \tag{2.6}$$
Remark that, by (2.5), the following relationship between the control $u$ and its transformation $v$ holds:
$$u(t)=\frac{d}{dt}\int_0^t\hat g(t,s)\,v(s)\,d\langle M\rangle_s, \tag{2.7}$$
so we can define the set of admissible controls as $\mathcal{U}_T=\{u\mid v\in V_T\}$. Note that these sets are non-empty.
From [12], we know that $Q_t=\int_0^t\psi(s,t)\,dZ_s$, where
$$\psi(s,t)=\frac12\Big(\frac{dt}{d\langle M\rangle_t}+\frac{ds}{d\langle M\rangle_s}\Big). \tag{2.8}$$
Moreover, $Q_t=\frac12\,\ell(t)^*\zeta_t$, where $\ell(t)=\begin{pmatrix}\psi(t,t)\\1\end{pmatrix}$, with $*$ standing for transposition, and $\zeta=(\zeta_t, t\ge0)$ is the solution of the stochastic differential equation
$$d\zeta_t=-\frac{\vartheta}{2}A(t)\zeta_t\,d\langle M\rangle_t+b(t)v(t)\,d\langle M\rangle_t+b(t)\,dM_t,\qquad \zeta_0=0_{2\times1}, \tag{2.9}$$
with
$$A(t)=\begin{pmatrix}\psi(t,t)&1\\ \psi^2(t,t)&\psi(t,t)\end{pmatrix},\qquad b(t)=\begin{pmatrix}1\\ \psi(t,t)\end{pmatrix}. \tag{2.10}$$
The classical Girsanov theorem gives
$$L(\vartheta,Z^T)=\exp\Big\{\int_0^T(-\vartheta Q_t+v(t))\,dZ_t-\frac12\int_0^T(-\vartheta Q_t+v(t))^2\,d\langle M\rangle_t\Big\}. \tag{2.11}$$
Now, from (2.11), the Fisher information is easily obtained:
$$I_T(\vartheta,v)=-\mathbf{E}_\vartheta\frac{\partial^2}{\partial\vartheta^2}\ln L(\vartheta,Z^T)=\frac14\,\mathbf{E}_\vartheta\int_0^T(\ell(t)^*\zeta_t)^2\,d\langle M\rangle_t.$$
Then we have the following results for the optimal input:
Theorem 2.1. The asymptotically optimal input in the class of controls $\mathcal{U}_T$ is $u_{opt}(t)=\frac{d}{dt}\int_0^t\hat g(t,s)\sqrt{\psi(s,s)}\,d\langle M\rangle_s$, where $\hat g$, $\psi$ and $\langle M\rangle$ are defined in (2.3), (2.8) and (2.1), respectively. Moreover,
$$\lim_{T\to+\infty}\frac{J_T(\vartheta)}{T}=I(\vartheta),$$
where
$$I(\vartheta)=\frac{1}{2\vartheta}+\frac{1}{\vartheta^2}, \tag{2.12}$$
and $J_T(\vartheta)$ is defined in (1.2).
From Theorem 2.1, we see that the optimal input $u_{opt}(t)$ does not depend on the unknown parameter $\vartheta$, and we can easily obtain the estimation error of the MLE $\hat\vartheta_T$:
$$\hat\vartheta_T-\vartheta=-\frac{\int_0^T Q_t\,dM_t}{\int_0^T Q_t^2\,d\langle M\rangle_t}. \tag{2.13}$$
Then the MLE attains efficiency, and we deduce its large-sample asymptotic properties:
Theorem 2.2. The MLE is uniformly consistent on compacts $K\subset\mathbb{R}^+_*$, i.e., for any $\nu>0$,
$$\lim_{T\to\infty}\sup_{\vartheta\in K}\mathbf{P}^T_\vartheta\{|\hat\vartheta_T-\vartheta|>\nu\}=0;$$
it is uniformly on compacts asymptotically normal: as $T$ tends to $+\infty$,
$$\lim_{T\to\infty}\sup_{\vartheta\in K}\big|\mathbf{E}_\vartheta f(\sqrt T(\hat\vartheta_T-\vartheta))-\mathbf{E}f(\eta)\big|=0\qquad\forall f\in C_b,$$
where $\eta$ is a zero-mean Gaussian random variable of variance $I(\vartheta)^{-1}$ (see (2.12) for the explicit value), which does not depend on $H$; and we have convergence of the moments, uniformly in $\vartheta\in K$: for any $p>0$,
$$\lim_{T\to\infty}\sup_{\vartheta\in K}\Big|\mathbf{E}_\vartheta\big|\sqrt T(\hat\vartheta_T-\vartheta)\big|^p-\mathbf{E}|\eta|^p\Big|=0.$$
Finally, the MLE is efficient in the sense of (1.3).
Theorem 2.3. The MLE $\hat\vartheta_T$ is strongly consistent, that is,
$$\hat\vartheta_T\xrightarrow{a.s.}\vartheta,\qquad T\to\infty.$$
We will compute the Fisher information with the same method as in [1], namely by separating it into two parts, one carrying the control and the other free of it. We focus on the following decomposition:
$$I_T(\vartheta,v)=\frac14\,\mathbf{E}_\vartheta\int_0^T\big(\ell(t)^*\zeta_t-\mathbf{E}_\vartheta\ell(t)^*\zeta_t+\mathbf{E}_\vartheta\ell(t)^*\zeta_t\big)^2\,d\langle M\rangle_t=I_{1,T}(\vartheta,v)+I_{2,T}(\vartheta,v), \tag{3.1}$$
where
$$I_{1,T}(\vartheta,v)=\frac14\int_0^T\mathbf{E}_\vartheta\big(\ell(t)^*\zeta_t-\mathbf{E}_\vartheta\ell(t)^*\zeta_t\big)^2\,d\langle M\rangle_t \tag{3.2}$$
and
$$I_{2,T}(\vartheta,v)=\frac14\int_0^T\big(\ell(t)^*\mathbf{E}_\vartheta\zeta_t\big)^2\,d\langle M\rangle_t. \tag{3.3}$$
The deterministic function $(P(t)=\mathbf{E}_\vartheta\zeta_t,\ t\ge0)$ satisfies the following equation:
$$\frac{dP(t)}{d\langle M\rangle_t}=-\frac{\vartheta}{2}A(t)P(t)+b(t)v(t),\qquad P(0)=0_{2\times1}; \tag{3.4}$$
at the same time, the process $\bar P=(\bar P_t=\zeta_t-\mathbf{E}_\vartheta\zeta_t,\ t\ge0)$ satisfies the stochastic equation
$$d\bar P_t=-\frac{\vartheta}{2}A(t)\bar P_t\,d\langle M\rangle_t+b(t)\,dM_t,$$
which is just $\zeta_t$ with $v(t)=0$ and can be found in [12].
With the decomposition (3.1) and the preceding remarks, we have
$$J_T(\vartheta)=I_{1,T}(\vartheta)+J_{2,T}(\vartheta),$$
where
$$J_{2,T}(\vartheta)=\sup_{v\in V_T} I_{2,T}(\vartheta,v).$$
From [12], we know that
$$\lim_{T\to\infty}\frac{I_{1,T}(\vartheta)}{T}=\frac{1}{2\vartheta},$$
so we only need to check that $\lim_{T\to\infty}\frac{J_{2,T}(\vartheta)}{T}=\frac{1}{\vartheta^2}$. From (3.4), we get
$$P(t)=\varphi(t)\int_0^t\varphi^{-1}(s)b(s)v(s)\,d\langle M\rangle_s, \tag{3.5}$$
where $\varphi(t)$ is the matrix defined by
$$\frac{d\varphi(t)}{d\langle M\rangle_t}=-\frac{\vartheta}{2}A(t)\varphi(t),\qquad \varphi(0)=\mathrm{Id}_{2\times2}, \tag{3.6}$$
with $\mathrm{Id}_{2\times2}$ the $2\times2$ identity matrix. Substituting into (3.3), we get
$$I_{2,T}(\vartheta,v)=\int_0^T\!\!\int_0^T K_T(s,\sigma)\,\frac{v(s)}{\sqrt{\psi(s,s)}}\,\frac{v(\sigma)}{\sqrt{\psi(\sigma,\sigma)}}\,ds\,d\sigma, \tag{3.7}$$
where the kernel
$$K_T(s,\sigma)=\int_{\max(s,\sigma)}^T G(t,s)G(t,\sigma)\,dt \tag{3.8}$$
and
$$G(t,\sigma)=\frac12\,\frac{1}{\sqrt{\psi(t,t)}}\,\ell(t)^*\varphi(t)\varphi^{-1}(\sigma)b(\sigma)\,\frac{1}{\sqrt{\psi(\sigma,\sigma)}}. \tag{3.9}$$
Then
$$J_{2,T}(\vartheta)=T\sup_{\tilde v\in L^2[0,T],\,\|\tilde v\|\le1}\int_0^T\!\!\int_0^T K_T(s,\sigma)\tilde v(s)\tilde v(\sigma)\,ds\,d\sigma=T\sup_{\tilde v\in L^2[0,T],\,\|\tilde v\|\le1}(K_T\tilde v,\tilde v), \tag{3.10}$$
where $\tilde v(s)=\frac{v(s)}{\sqrt T}\,\frac{1}{\sqrt{\psi(s,s)}}$ and $\|\cdot\|$ stands for the usual norm in $L^2[0,T]$. Thus, Lemma 5.1 completes the proof.
The proof of Theorem 2.2 is based on the Ibragimov-Khasminskii program of Theorem I.10.1 in [13]. Substituting $v_{opt}(t)=\sqrt{\psi(t,t)}$ into Eq (2.11), the likelihood function is
$$L(\vartheta,Z^T)=\exp\Big\{\int_0^T(-\vartheta Q_t+v_{opt}(t))\,dZ_t-\frac12\int_0^T(-\vartheta Q_t+v_{opt}(t))^2\,d\langle M\rangle_t\Big\};$$
then the MLE will be
$$\hat\vartheta_T=\frac{\int_0^T v_{opt}(t)Q_t\,d\langle M\rangle_t-\int_0^T Q_t\,dZ_t}{\int_0^T Q_t^2\,d\langle M\rangle_t} \tag{3.11}$$
and the estimation error has the form
$$\hat\vartheta_T-\vartheta=-\frac{\int_0^T Q_t\,dM_t}{\int_0^T Q_t^2\,d\langle M\rangle_t}; \tag{3.12}$$
note that here $Q_t$ corresponds to the input $v_{opt}(t)$. Because $\int_0^t Q_s\,dM_s$, $0\le t\le T$, is a martingale and $\int_0^t Q_s^2\,d\langle M\rangle_s$ is its quadratic variation, in order to prove Theorem 2.2 we only need to examine the Laplace transform of this quadratic variation, and Lemma 5.2 achieves the proof.
By the law of large numbers, in order to obtain the strong consistency of $\hat\vartheta_T$ we only need to prove that
$$\lim_{T\to\infty}\int_0^T Q_t^2\,d\langle M\rangle_t=+\infty, \tag{3.13}$$
or that there exists a positive constant $\mu$ such that the Laplace transform satisfies
$$\lim_{T\to\infty}\mathbf{E}\exp\Big(-\mu\int_0^T Q_t^2\,d\langle M\rangle_t\Big)=0.$$
In Lemma 5.4, if we take $\mu>0$ large enough that the limit there is negative (such a $\mu$ is easily found), then Eq (3.13) follows directly from that lemma, which implies the strong consistency.
In fact, the preceding Laplace transform method is also useful when $u(t)$ is a known constant. This problem was considered in [7]; here we use the Cameron-Martin formula to reprove the result.
Let us consider $u(t)=\alpha$, a nonzero constant. In this case we denote the processes $X$, $Z$, $Q$ by $X^\alpha$, $Z^\alpha$ and $Q^\alpha$, and it is not hard to find that the MLE of the unknown parameter $\vartheta$ is
$$\hat\vartheta^\alpha_T=\frac{\int_0^T\alpha Q^\alpha_t\,d\langle M\rangle_t-\int_0^T Q^\alpha_t\,dZ^\alpha_t}{\int_0^T(Q^\alpha_t)^2\,d\langle M\rangle_t}, \tag{4.1}$$
where
$$dZ^\alpha_t=(\alpha-\vartheta Q^\alpha_t)\,d\langle M\rangle_t+dM_t,\qquad t\in[0,T]. \tag{4.2}$$
As in [7], the estimation error can be represented as
$$\hat\vartheta^\alpha_T-\vartheta=-\frac{\int_0^T Q^\alpha_t\,dM_t}{\int_0^T(Q^\alpha_t)^2\,d\langle M\rangle_t}. \tag{4.3}$$
In order to obtain the results of [7], namely
$$\sqrt T(\hat\vartheta^\alpha_T-\vartheta)\xrightarrow{d}\mathcal N(0,2\vartheta)$$
for $H>1/2$ and
$$\sqrt T(\hat\vartheta^\alpha_T-\vartheta)\xrightarrow{d}\mathcal N\Big(0,\frac{2\vartheta^2}{2\alpha^2+\vartheta}\Big)$$
for $H<1/2$, we prove the stronger result on the Laplace transform:
Lemma 4.1. For $H>1/2$, the limit of the Laplace transform is
$$\lim_{T\to\infty}L^\alpha_T(\mu)=\lim_{T\to\infty}\mathbf{E}_\vartheta\exp\Big(-\frac{\mu}{T}\int_0^T(Q^\alpha_t)^2\,d\langle M\rangle_t\Big)=\exp\Big(-\frac{\mu}{2\vartheta}\Big),\qquad\forall\,\mu>0,$$
and for $H<1/2$,
$$\lim_{T\to\infty}L^\alpha_T(\mu)=\lim_{T\to\infty}\mathbf{E}_\vartheta\exp\Big(-\frac{\mu}{T}\int_0^T(Q^\alpha_t)^2\,d\langle M\rangle_t\Big)=\exp\Big(-\mu\Big(\frac{1}{2\vartheta}+\Big(\frac{\alpha}{\vartheta}\Big)^2\Big)\Big),\qquad\forall\,\mu>0.$$
The proof will be presented in the Appendix.
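To make the comparison concrete: under the optimal input, Theorem 2.1 gives the asymptotic variance $I(\vartheta)^{-1}=\big(\frac1{2\vartheta}+\frac1{\vartheta^2}\big)^{-1}=\frac{2\vartheta^2}{\vartheta+2}$, while Lemma 4.1 gives $\frac{2\vartheta^2}{2\alpha^2+\vartheta}$ for the constant input when $H<1/2$. The small numerical check below is illustrative only; it reads the energy constraint of Section 2 as asymptotically forcing $\alpha^2\le1$ in this case (our reading, since $\langle M\rangle_T/T\to1$ for $H<1/2$):

```python
def var_optimal(theta):
    """Asymptotic MLE variance with the optimal input: 1/I(theta),
    where I(theta) = 1/(2*theta) + 1/theta**2 (Theorem 2.1)."""
    return 1.0 / (1.0 / (2.0 * theta) + 1.0 / theta**2)

def var_constant(theta, alpha):
    """Asymptotic MLE variance with constant input u = alpha for H < 1/2:
    2*theta**2 / (2*alpha**2 + theta) (Lemma 4.1)."""
    return 2.0 * theta**2 / (2.0 * alpha**2 + theta)

for theta in [0.5, 1.0, 2.0, 5.0]:
    for alpha in [0.2, 0.5, 1.0]:          # admissible constants: alpha**2 <= 1
        assert var_optimal(theta) <= var_constant(theta, alpha) + 1e-12
    # the constant input attains the optimum at the boundary alpha = 1
    assert abs(var_optimal(theta) - var_constant(theta, 1.0)) < 1e-12
```

Under this reading of the constraint, a unit constant input already matches the optimal asymptotic variance when $H<1/2$.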
Remark 4.2. The strong consistency of $\hat\vartheta^\alpha_T$ can also be obtained with the same proof as Theorem 2.3.
Remark 4.3. If we only consider this special case with $u(t)=\alpha$, the Laplace transform has no advantage, because we can write an explicit solution of $X^\alpha$ in terms of the classical O-U process, and every term of the estimation error can be easily computed as presented in [7]. But in the optimal input case, even though we can find the explicit solution, the components of the estimation error are complicated, so the Laplace transform is more efficient.
Remark 4.4. We only consider the MLE of $\vartheta$ when $u(t)$ is known, but the Laplace transform will be more useful for the O-U process with periodic drift of the form
$$dX_t=\Big(\sum_{i=1}^p\mu_i\varphi_i(t)-\vartheta X_t\Big)dt+d\xi_t,\qquad X_0=0,$$
where $\mu_i$ and $\vartheta$ are all unknown and to be estimated. We will use the Cameron-Martin formula for the quadratic variation of the $(p+1)$-dimensional martingale, especially when $H<1/2$; this will be our future work.
Lemma 5.1. For the kernel $K_T(s,\sigma)$ defined in Eq (3.8),
$$\lim_{T\to\infty}\sup_{\tilde v\in L^2[0,T],\,\|\tilde v\|\le1}(K_T\tilde v,\tilde v)=\frac{1}{\vartheta^2}, \tag{5.1}$$
with the optimal input $v_{opt}(t)=\sqrt{\psi(t,t)}$.
Proof. When we take $v(t)=v_{opt}(t)=\sqrt{\psi(t,t)}$, then
$$\frac{dP(t)}{d\langle M\rangle_t}=-\frac{\vartheta}{2}A(t)P(t)+b(t)v_{opt}(t),\qquad P(0)=0_{2\times1}.$$
Note that for $H>1/2$, $\frac{d\langle M\rangle_t}{dt}=g^2(t,t)$, and from [12]
$$\langle M\rangle_T\sim T^{2-2H}\lambda_H^{-1},\quad T\to\infty,\qquad \lambda_H=\frac{2H\,\Gamma(3-2H)\,\Gamma(H+1/2)}{\Gamma(3/2-H)};$$
then, with the computations of [4], we can easily obtain
$$\lim_{T\to\infty}\frac{1}{4T}\int_0^T(\ell(t)^*P(t))^2\,d\langle M\rangle_t=\frac{1}{\vartheta^2}. \tag{5.2}$$
On the other hand, for $H<1/2$ we have
$$\lim_{T\to\infty}\frac{\langle M\rangle_T}{T}=1,$$
and we can also obtain the result of (5.2); that is to say, the lower bound is at least $\frac{1}{\vartheta^2}$.
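As a quick numerical sanity check on the constant $\lambda_H$ (using the form written above, which is an assumption of this reconstruction): at $H=1/2$ it equals $1$, consistent with $\langle M\rangle_T\sim T$ for Brownian motion. Illustrative only:

```python
import math

def lambda_H(H):
    """Normalizing constant in <M>_T ~ T**(2-2H) / lambda_H (assumed form:
    2H * Gamma(3-2H) * Gamma(H+1/2) / Gamma(3/2-H))."""
    return 2 * H * math.gamma(3 - 2 * H) * math.gamma(H + 0.5) / math.gamma(1.5 - H)

# at H = 1/2 the fundamental martingale is Brownian motion itself and <M>_T = T
assert abs(lambda_H(0.5) - 1.0) < 1e-12
# the constant stays positive over the admissible range of H
assert all(lambda_H(h) > 0 for h in [0.1, 0.3, 0.7, 0.9])
```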
Now we will find the upper bound. Let us introduce the Gaussian process $(\xi_t, 0\le t\le T)$,
$$\xi_t=\Big(\int_t^T\frac{1}{\sqrt{\psi(\sigma,\sigma)}}\,\ell(\sigma)^*\varphi(\sigma)\odot dW_\sigma\Big)\varphi^{-1}(t),\qquad \xi_T=0,$$
where $(W_\sigma,\sigma\ge0)$ is a Wiener process and $\odot$ denotes the Itô backward integral (see [21]). It is worth emphasizing that
$$K_T(s,\sigma)=\frac14\,\mathbf{E}\Big(\xi_s b(s)\frac{1}{\sqrt{\psi(s,s)}}\,\xi_\sigma b(\sigma)\frac{1}{\sqrt{\psi(\sigma,\sigma)}}\Big)=\mathbf{E}(X_\sigma X_s),$$
where $X$ is the centered Gaussian process defined by $X_t=\frac12\,\xi_t b(t)\frac{1}{\sqrt{\psi(t,t)}}$. The process $(\xi_t, 0\le t\le T)$ satisfies the following dynamics:
$$-d\xi_t=-\frac{\vartheta}{2}\,\xi_t A(t)\,d\langle M\rangle_t+\ell(t)^*\frac{1}{\sqrt{\psi(t,t)}}\odot dW_t,\qquad \xi_T=0.$$
Obviously, $K_T(s,\sigma)$ is a compact symmetric operator for fixed $T$, so we need to estimate the spectral gap (the first eigenvalue $\nu_1(T)$) of this operator. The estimation of the spectral gap is based on a Laplace transform computation. Let us compute, for $a<0$ of sufficiently small absolute value, the Laplace transform of $\int_0^T X_t^2\,dt$:
$$L_T(a)=\mathbf{E}_\vartheta\exp\Big(-a\int_0^T X_t^2\,dt\Big)=\mathbf{E}_\vartheta\exp\Big(-a\int_0^T\Big(\frac12\,\xi_t b(t)\frac{1}{\sqrt{\psi(t,t)}}\Big)^2 dt\Big).$$
On one hand, for $a>-\frac{1}{\nu_1(T)}$, since $X$ is a centered Gaussian process with covariance operator $K_T$, using Mercer's theorem and Parseval's identity, $L_T(a)$ can be represented as
$$L_T(a)=\prod_{i\ge1}(1+2a\nu_i(T))^{-\frac12}, \tag{5.3}$$
where $\nu_i(T)$, $i\ge1$, is the sequence of positive eigenvalues of the covariance operator. On the other hand,
$$L_T(a)=\mathbf{E}_\vartheta\exp\Big(-\frac{a}{4}\int_0^T\xi_t b(t)b(t)^*\xi_t^*\,d\langle M\rangle_t\Big)=\exp\Big(\frac12\int_0^T\mathrm{trace}\big(H(t)M(t)\big)\,d\langle M\rangle_t\Big),$$
where $M(t)=\ell(t)\ell(t)^*$ and $H(t)$ is the solution of the Riccati differential equation
$$\frac{dH(t)}{d\langle M\rangle_t}=H(t)\mathcal A(t)^*+\mathcal A(t)H(t)+H(t)M(t)H(t)-\frac{a}{2}\,b(t)b(t)^*,$$
with $\mathcal A(t)=-\frac{\vartheta}{2}A(t)$ and the initial condition $H(0)=0_{2\times2}$, provided that the solution of this equation exists for all $0\le t\le T$.
It is well known that if $\det\Psi_1(t)>0$ for any $t\in[0,T]$, then $H(t)=\Psi_1^{-1}(t)\Psi_2(t)$, where the pair of $2\times2$ matrices $(\Psi_1,\Psi_2)$ satisfies the system of linear differential equations
$$\frac{d\Psi_1(t)}{d\langle M\rangle_t}=-\Psi_1(t)\mathcal A(t)-\Psi_2(t)M(t),\qquad \Psi_1(0)=\mathrm{Id}_{2\times2},$$
$$\frac{d\Psi_2(t)}{d\langle M\rangle_t}=-\frac{a}{2}\,\Psi_1(t)b(t)b(t)^*+\Psi_2(t)\mathcal A(t)^*,\qquad \Psi_2(0)=0_{2\times2}, \tag{5.4}$$
and
$$L_T(a)=\exp\Big(-\frac12\int_0^T\mathrm{trace}(\mathcal A(t))\,d\langle M\rangle_t\Big)\big(\det\Psi_1(T)\big)^{-\frac12}. \tag{5.5}$$
Rewrite the system (5.4) in the form
$$\frac{d(\Psi_1(t),\Psi_2(t)J)}{d\langle M\rangle_t}=(\Psi_1(t),\Psi_2(t)J)\cdot(\Upsilon\otimes A(t)), \tag{5.6}$$
where $J=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ and $\Upsilon=\begin{pmatrix}\frac{\vartheta}{2}&-\frac{a}{2}\\-1&-\frac{\vartheta}{2}\end{pmatrix}$.
When $-\frac{\vartheta^2}{2}\le a\le0$, the matrix $\Upsilon$ has two real eigenvalues, which we denote by $(x_i)_{i=1,2}$. It can be checked that there exists a constant $C>0$ such that
$$\det\Psi_1(T)=\exp(x_1 T)\Big(C+O_{T\to\infty}\Big(\frac1T\Big)\Big),$$
where $x_1=\sqrt{\frac{\vartheta^2}{4}+\frac{a}{2}}$. Therefore, due to (5.5), we have $\prod_{i\ge1}(1+2a\nu_i(T))>0$ for any $a>-\frac{\vartheta^2}{2}$. This means that
$$\nu_1(T)\le\frac{1}{\vartheta^2}.$$
Lemma 5.2. For $v(t)=v_{opt}(t)$ defined in Lemma 5.1, the Laplace transform satisfies
$$L_T(\mu)=\mathbf{E}_\vartheta\exp\Big(-\frac{\mu}{T}\int_0^T Q_t^2\,d\langle M\rangle_t\Big)\xrightarrow[T\to\infty]{}\exp\Big(-\mu\Big(\frac{1}{2\vartheta}+\frac{1}{\vartheta^2}\Big)\Big) \tag{5.7}$$
for every $\mu>0$.
Proof. First, we replace $Q_t$ with $\zeta_t$ and rewrite the Laplace transform as
$$L_T(\mu)=\mathbf{E}_\vartheta\exp\Big\{-\frac{\mu}{T}\int_0^T\zeta_t^* R(t)\zeta_t\,d\langle M\rangle_t\Big\},$$
where $\zeta_t$ is defined in (2.9) and $R(t)=\frac14\begin{pmatrix}\psi^2(t,t)&\psi(t,t)\\ \psi(t,t)&1\end{pmatrix}$. Following [14], we have
$$L_T(\mu)=\exp\Big\{-\frac{\mu}{T}\int_0^T\big[\mathrm{tr}(\Gamma(t)R(t))+Z^*(t)R(t)Z(t)\big]\,d\langle M\rangle_t\Big\},$$
where
$$\frac{d\Gamma(t)}{d\langle M\rangle_t}=-\frac{\vartheta}{2}A(t)\Gamma(t)-\frac{\vartheta}{2}\Gamma(t)A(t)^*+b(t)b(t)^*-\frac{2\mu}{T}\,\Gamma(t)R(t)\Gamma(t)$$
and
$$Z(t)=\mathbf{E}_\vartheta\zeta_t-\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s, \tag{5.8}$$
with
$$\frac{d\varphi(t)}{d\langle M\rangle_t}=-\frac{\vartheta}{2}A(t)\varphi(t).$$
From [12] we know that
$$\lim_{T\to\infty}\exp\Big(-\frac{\mu}{T}\int_0^T\mathrm{tr}(\Gamma(t)R(t))\,d\langle M\rangle_t\Big)=\exp\Big(-\frac{\mu}{2\vartheta}\Big).$$
On the other hand, $\mathbf{E}\zeta_t=P(t)$ defined in Lemma 5.1 with $v(t)=v_{opt}(t)$; thus
$$\lim_{T\to\infty}\exp\Big(-\frac{\mu}{T}\int_0^T(\mathbf{E}\zeta_t)^*R(t)\,\mathbf{E}\zeta_t\,d\langle M\rangle_t\Big)=\lim_{T\to\infty}\exp\Big(-\frac{\mu}{4T}\int_0^T(\ell^*(t)P(t))^2\,d\langle M\rangle_t\Big)=\exp\Big(-\frac{\mu}{\vartheta^2}\Big).$$
Now, the conclusion holds provided that
$$\lim_{T\to\infty}\Big(\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s\Big)^{\!*}R(t)\Big(\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s\Big)=0.$$
On one hand, from [4] and [12], when $t$ is large enough,
$$\int_0^t|F(t,s)|\,ds=\frac{\mu}{T}\int_0^t\big|\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)\big|\,ds=O\Big(\frac1T\Big),\quad T\to\infty, \tag{5.9}$$
where
$$F(t,s)=\frac{\mu}{T}\,\big|\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)\big|$$
and $|\cdot|$ denotes the $L^1$ norm. On the other hand, if we define the operator $S$ by
$$S(f)(t)=\int_0^t|F(t,s)|f(s)\,ds,$$
then Eq (5.8) leads to
$$|Z(t)|\le|P(t)|+S(|Z|)(t),$$
or, in other words, $(I-S)(|Z|)(t)\le|P(t)|\le\mathrm{Const}$. From Eq (5.9) we have, for $t$ and $T$ large enough,
$$|Z(t)|\le(I-S)^{-1}(\mathrm{Const})(t)=\sum_{n=0}^{\infty}S^n(\mathrm{Const})(t)\le\mathrm{Const}. \tag{5.10}$$
Here $\mathrm{Const}$ denotes a constant that may differ from one equation to another. Combining (5.9) and (5.10), we have for $t$ large enough
$$\int_0^t|F(t,s)||Z(s)|\,ds=O\Big(\frac1T\Big),\quad T\to\infty,$$
which achieves the proof.
In the following we use the same method to prove Lemma 4.1. When $u(t)=\alpha$, our two-dimensional process $\zeta^\alpha=(\zeta^\alpha_t, 0\le t\le T)$ satisfies the equation
$$d\zeta^\alpha_t=\alpha b(t)\,d\langle M\rangle_t-\frac{\vartheta}{2}A(t)\zeta^\alpha_t\,d\langle M\rangle_t+b(t)\,dM_t, \tag{5.11}$$
where $A(t)$, $b(t)$ are defined in (2.10). From the previous proof we know that
$$L^\alpha_T(\mu)=\exp\Big\{-\frac{\mu}{T}\int_0^T\big[\mathrm{tr}(\Gamma(t)R(t))+Z^*(t)R(t)Z(t)\big]\,d\langle M\rangle_t\Big\}, \tag{5.12}$$
where
$$Z(t)=\mathbf{E}\zeta^\alpha_t-\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s. \tag{5.13}$$
The functions $\Gamma(t)$, $\varphi(t)$ and the matrix $R(t)$ are defined in the previous lemma. Let us recall that
$$\mathbf{E}Q^\alpha_t=\mathbf{E}\,\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)X^\alpha_s\,ds=\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)\,\mathbf{E}X^\alpha_s\,ds.$$
When
$$dX^\alpha_t=(\alpha-\vartheta X^\alpha_t)\,dt+d\xi_t,$$
we have
$$\mathbf{E}X^\alpha_t=\frac{\alpha}{\vartheta}-\frac{\alpha}{\vartheta}\,e^{-\vartheta t}.$$
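Indeed, the mean $m(t)=\mathbf{E}X^\alpha_t$ solves the deterministic ODE $m'(t)=\alpha-\vartheta m(t)$, $m(0)=0$, whose solution is the displayed formula. A quick Euler-scheme check (illustrative; the values of $\alpha$, $\vartheta$ and the step count are arbitrary):

```python
import math

def mean_exact(t, alpha, theta):
    """E X^alpha_t = (alpha/theta) * (1 - exp(-theta*t))."""
    return alpha / theta * (1.0 - math.exp(-theta * t))

def mean_euler(t_end, alpha, theta, n=50_000):
    """Euler scheme for the mean equation m' = alpha - theta*m, m(0) = 0."""
    dt = t_end / n
    m = 0.0
    for _ in range(n):
        m += (alpha - theta * m) * dt
    return m

# the scheme reproduces the closed form up to discretization error
assert abs(mean_euler(2.0, alpha=0.7, theta=1.3) - mean_exact(2.0, 0.7, 1.3)) < 1e-4
```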
It is clear that when we calculate the limit of $\frac1T\int_0^T(\mathbf{E}Q^\alpha_t)^2\,d\langle M\rangle_t$, the term $\frac{\alpha}{\vartheta}e^{-\vartheta t}$ makes no contribution. Now
$$\lim_{T\to\infty}\frac1T\int_0^T(\mathbf{E}Q^\alpha_t)^2\,d\langle M\rangle_t=\lim_{T\to\infty}\Big(\frac{\alpha}{\vartheta}\Big)^2\frac{\langle M\rangle_T}{T}.$$
From [12], this limit is $0$ when $H>1/2$ and $\big(\frac{\alpha}{\vartheta}\big)^2$ when $H<1/2$. Since
$$\int_0^T(\mathbf{E}\zeta^\alpha_t)^*R(t)\,\mathbf{E}\zeta^\alpha_t\,d\langle M\rangle_t=\int_0^T\Big(\frac12\,\ell^*(t)\,\mathbf{E}\zeta^\alpha_t\Big)^2 d\langle M\rangle_t=\int_0^T(\mathbf{E}Q^\alpha_t)^2\,d\langle M\rangle_t,$$
the conclusion is true provided that
$$\lim_{T\to\infty}\Big(\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s\Big)^{\!*}R(t)\Big(\frac{\mu}{T}\int_0^t\varphi(t)\varphi^{-1}(s)\Gamma(s)R(s)Z(s)\,d\langle M\rangle_s\Big)=0,$$
and the proof of this can be found in the previous lemma.
Remark 5.3. From the previous proofs we can see that the main difference compared with [12] is the extra part $Z(t)$ of the Laplace transform coming from the function $u(t)$. Even though in our two cases (optimal input and constant input) this extra part converges to $0$ and does not have a decisive influence on the final result, we still cannot ignore it. On the other hand, the limit of the main part is the sum of the contribution of the uncontrolled mixed fractional O-U process and the additional part coming from $u(t)$.
Lemma 5.4. For the controlled mixed fractional Ornstein-Uhlenbeck process with drift parameter $\vartheta$, we have the following limit:
$$\mathcal K_T(\mu)=\frac1T\log\mathbf{E}\exp\Big(-\mu\int_0^T Q_t^2\,d\langle M\rangle_t\Big)\longrightarrow -\frac{\mu}{\vartheta^2}+\frac{\vartheta}{2}-\sqrt{\frac{\vartheta^2}{4}+\frac{\mu}{2}},\qquad T\to\infty,$$
for all $\mu>-\frac{\vartheta^2}{2}$.
Proof. The proof follows directly from [19] and Lemma 5.2; more specifically, the term $\frac{\vartheta}{2}-\sqrt{\frac{\vartheta^2}{4}+\frac{\mu}{2}}$ comes from [19] and the $\frac{1}{\vartheta^2}$ contribution from Lemma 5.2.
In this paper, we have considered controlled drift parameter estimation for the mixed fractional Ornstein-Uhlenbeck process
$$dX_t=-\vartheta X_t\,dt+u(t)\,dt+dW_t+dB^H_t,\qquad H\in(0,1),\ H\ne1/2.$$
First, we found an explicit control function $u_{opt}(t)$ which maximizes the Fisher information of the unknown drift parameter $\vartheta$. Then, under this special input, we used the Laplace transform to establish the asymptotic normality and strong consistency of the maximum likelihood estimator. On the other hand, we used the same Laplace transform method to analyze the MLE of $\vartheta$ when $u(t)$ is a known constant.
Of course, when $u(t)$ is known we can find the solution of $X_t$ and use relations between our controlled model and the uncontrolled one to study the MLE of $\vartheta$, as presented in [7]. But there is no doubt that the direct Laplace transform is easier to understand and manipulate.
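To illustrate this last point in the simplest setting, consider the purely Brownian benchmark $dX_t=-\vartheta X_t\,dt+dW_t$ (the case $H=1/2$, which lies outside the mixed model of this paper but shares the structure of the likelihood equation). There the MLE is explicit, $\hat\vartheta_T=-\int_0^T X_t\,dX_t\big/\int_0^T X_t^2\,dt$, and the estimation error $-\int_0^T X_t\,dW_t\big/\int_0^T X_t^2\,dt$ mirrors (3.12). A Monte Carlo sketch on an Euler grid (all numerical choices are arbitrary):

```python
import numpy as np

def ou_mle(theta=1.0, T=200.0, dt=0.01, seed=0):
    """Euler-simulate dX = -theta*X dt + dW and compute the discretized MLE
    theta_hat = -sum(X_k * (X_{k+1} - X_k)) / sum(X_k**2 * dt)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(n + 1)
    dw = rng.standard_normal(n) * np.sqrt(dt)
    for k in range(n):
        x[k + 1] = x[k] - theta * x[k] * dt + dw[k]
    dx = np.diff(x)
    return -np.sum(x[:-1] * dx) / np.sum(x[:-1] ** 2 * dt)

# asymptotically sqrt(T)*(theta_hat - theta) ~ N(0, 2*theta),
# so with T = 200 the standard error is about 0.1 for theta = 1
```

In the mixed fractional case no such direct discretization of $Q_t$ is available, which is precisely why the Laplace transform arguments above are used instead.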
Remark 6.1. Regarding the simulation of the MLE, even in the mixed fractional O-U case with $u(t)=0$ we do not yet have a proper method, because the process $Q_t=\frac{d}{d\langle M\rangle_t}\int_0^t g(s,t)X_s\,ds$ is hard to simulate. However, we can use the one-step MLE to achieve this goal, starting from an initial Least Squares Estimator (LSE) and using the local asymptotic normality (LAN) property of the process $X=(X_t, 0\le t\le T)$. This will be our future work, since establishing the LAN property requires further tools such as the Malliavin calculus and the Wick product.
Chunhao Cai is supported by the Fundamental Research Funds of Shanghai University of Finance and Economics 2020110294.
The authors declare that they have no conflict of interest regarding this work.
[1] A. Brouste, C. Cai, Controlled drift estimation in fractional diffusion linear systems, Stoch. Dynam., 13 (2013), 1250025. doi: 10.1142/S0219493712500256
[2] A. Brouste, M. Kleptsyna, Asymptotic properties of MLE for partially observed fractional diffusion system, Stat. Inference Stoch. Process., 13 (2010), 1–13. doi: 10.1007/s11203-009-9035-x
[3] A. Brouste, M. Kleptsyna, A. Popier, Fractional diffusion with partial observations, Commun. Stat. Theor. M., 40 (2011), 3479–3491. doi: 10.1080/03610926.2011.581173
[4] A. Brouste, M. Kleptsyna, A. Popier, Design for estimation of drift parameter in fractional diffusion system, Stat. Inference Stoch. Process., 15 (2012), 133–149. doi: 10.1007/s11203-012-9067-5
[5] C. Bender, T. Sottinen, E. Valkeila, Fractional processes as models in stochastic finance, In: Advanced Mathematical Methods for Finance, Springer, Heidelberg, 2011, 75–103.
[6] C. Cai, P. Chigansky, M. Kleptsyna, Mixed Gaussian processes: A filtering approach, Ann. Probab., 44 (2016), 3032–3075.
[7] C. Cai, Y. Huang, W. Xiao, Maximum likelihood estimation for mixed Vasicek processes, 2020, arXiv: 2003.13351.
[8] C. Cai, W. Lv, Adaptative design for estimation of parameter of second order differential equation in fractional diffusion system, Physica A, 541 (2020), 123544. doi: 10.1016/j.physa.2019.123544
[9] P. Cheridito, Mixed fractional Brownian motion, Bernoulli, 7 (2001), 913–934.
[10] P. Cheridito, Representation of Gaussian measures that are equivalent to Wiener measure, In: Séminaire de Probabilités XXXVII, Springer, Berlin, Heidelberg, 2003, 81–89.
[11] P. Chigansky, M. Kleptsyna, Exact asymptotics in eigenproblems for fractional Brownian motion covariance operators, Stoch. Proc. Appl., 128 (2018), 2007–2059. doi: 10.1016/j.spa.2017.08.019
[12] P. Chigansky, M. Kleptsyna, Statistical analysis of the mixed fractional Ornstein-Uhlenbeck process, Theor. Probab. Appl., 63 (2019), 408–425. doi: 10.1137/S0040585X97T989143
[13] I. Ibragimov, R. Khasminskii, Statistical estimation: Asymptotic theory, Springer Science & Business Media, 1981.
[14] M. Kleptsyna, A. Le Breton, Optimal linear filtering of general multidimensional Gaussian processes and its application to Laplace transforms of quadratic functionals, J. Appl. Math. Stoch. Anal., 14 (2001), 215–226.
[15] M. Kleptsyna, A. Le Breton, Statistical analysis of the fractional Ornstein-Uhlenbeck type process, Stat. Inference Stoch. Process., 5 (2002), 229–241.
[16] M. Kleptsyna, A. Le Breton, Extension of the Kalman-Bucy filter to elementary linear systems with fractional Brownian noises, Stat. Inference Stoch. Process., 5 (2002), 249–271.
[17] Y. A. Kutoyants, Statistical inference for ergodic diffusion processes, Springer-Verlag, London, 2004.
[18] R. Liptser, A. Shiryaev, Statistics of random processes, Springer-Verlag, Berlin, Heidelberg, New York, 2001.
[19] D. Marushkevych, Large deviations for drift parameter estimator of mixed fractional Ornstein-Uhlenbeck process, Mod. Stoch. Theory Appl., 3 (2016), 107–117. doi: 10.15559/16-VMSTA54
[20] A. Ovseevich, R. Khasminskii, P. Chow, Adaptative design for estimation of unknown parameters in linear systems, Probl. Inform. Transm., 36 (2000), 38–68.
[21] B. L. Rozovsky, S. V. Lototsky, Stochastic evolution systems, Kluwer, Dordrecht, 1990.