Research article

Variance-constrained robust H∞ state estimation for discrete time-varying uncertain neural networks with uniform quantization

  • In this paper, we consider the robust H∞ state estimation (SE) problem for a class of discrete time-varying uncertain neural networks (DTVUNNs) with uniform quantization and time-delay under variance constraints. In order to reflect the actual situation for the dynamic system, a constant time-delay is considered. In addition, the measurement output is first quantized by a uniform quantizer and then transmitted through a communication channel. The main purpose is to design a time-varying finite-horizon state estimator such that, for both the uniform quantization and the time-delay, some sufficient criteria are obtained for the estimation error (EE) system to satisfy the error variance boundedness and the H∞ performance constraint. With the help of stochastic analysis techniques, a new H∞ SE algorithm without resorting to the augmentation method is proposed for DTVUNNs with uniform quantization. Finally, a simulation example is given to illustrate the feasibility and validity of the proposed variance-constrained robust H∞ SE method.

    Citation: Baoyan Sun, Jun Hu, Yan Gao. Variance-constrained robust H∞ state estimation for discrete time-varying uncertain neural networks with uniform quantization[J]. AIMS Mathematics, 2022, 7(8): 14227-14248. doi: 10.3934/math.2022784




    Parameter estimation for non-ergodic type diffusion processes has been developed in several papers. For motivation and further references, we refer the reader to Basawa and Scott [4], Dietz and Kutoyants [5], Jacod [6] and Shimizu [7]. However, the statistical analysis of equations driven by fractional Brownian motion (fBm) is more recent. The development of a stochastic calculus with respect to the fBm has allowed the study of such models. In recent years, several researchers have been interested in studying statistical estimation problems for Gaussian Ornstein-Uhlenbeck processes. Estimation of the drift parameters in fractional-noise-driven Ornstein-Uhlenbeck processes is a problem that is both well-motivated by practical needs and theoretically challenging.

    In this paper, we consider the weighted fractional Brownian motion (wfBm) $B^{a,b}:=\{B^{a,b}_t,\ t\ge 0\}$ with parameters $(a,b)$ such that $a>-1$, $|b|<1$ and $|b|<a+1$, defined as a centered Gaussian process starting from zero with covariance

    $$R^{a,b}(t,s)=E\big(B^{a,b}_tB^{a,b}_s\big)=\int_0^{s\wedge t}u^a\big[(t-u)^b+(s-u)^b\big]\,du,\qquad s,t\ge 0. \tag{1.1}$$

    For $a=0$, $-1<b<1$, the wfBm is a fBm. The process $B^{a,b}$ was introduced by [8] as an extension of fBm. Moreover, it shares several properties with fBm, such as self-similarity, path continuity, behavior of increments, long-range dependence and the non-semimartingale property. But, unlike fBm, the wfBm does not have stationary increments for $a\neq 0$. For more details about the subject, we refer the reader to [8].

    In this work we consider the non-ergodic Ornstein-Uhlenbeck process $X:=\{X_t,\ t\ge 0\}$ driven by a wfBm $B^{a,b}$, that is, the unique solution of the following linear stochastic differential equation

    $$X_0=0;\qquad dX_t=\theta X_t\,dt+dB^{a,b}_t, \tag{1.2}$$

    where $\theta>0$ is an unknown parameter.
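
    Equations (1.1) and (1.2) already contain everything needed to simulate the model numerically. The sketch below (in Python with NumPy/SciPy; the helper names wfbm_cov, wfbm_cholesky and simulate_ou are ours, not from the paper) samples a wfBm on a grid through a Cholesky factorization of the covariance (1.1) and then builds X by an Euler recursion for (1.2). It is only an illustration under these assumptions, not the authors' code.

```python
import numpy as np
from scipy.integrate import quad

def wfbm_cov(t, s, a, b):
    """Covariance R^{a,b}(t,s) of the wfBm, eq. (1.1)."""
    m = min(s, t)
    if m <= 0.0:
        return 0.0
    val, _ = quad(lambda u: u**a * ((t - u)**b + (s - u)**b), 0.0, m)
    return val

def wfbm_cholesky(a, b, n, dt):
    """Cholesky factor of the covariance matrix of (B_{t_1}, ..., B_{t_n}), t_i = i*dt."""
    t = dt * np.arange(1, n + 1)
    cov = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            cov[i, j] = cov[j, i] = wfbm_cov(t[i], t[j], a, b)
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for numerical safety

def simulate_ou(theta, L, dt, rng):
    """One path X_{t_0}, ..., X_{t_n} of (1.2), from a wfBm sampled via the factor L (Euler scheme)."""
    B = L @ rng.standard_normal(L.shape[0])               # (B_{t_1}, ..., B_{t_n})
    dB = np.diff(np.concatenate(([0.0], B)))              # wfBm increments
    X = np.zeros(L.shape[0] + 1)
    for i, db in enumerate(dB):
        X[i + 1] = X[i] + theta * X[i] * dt + db          # dX = theta * X dt + dB
    return X

rng = np.random.default_rng(0)
L = wfbm_cholesky(a=0.5, b=0.3, n=200, dt=0.02)           # illustrative parameter values
X = simulate_ou(theta=2.5, L=L, dt=0.02, rng=rng)
```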

    An example of an interesting problem related to (1.2) is the statistical estimation of $\theta$ when one observes $X$. In recent years, several researchers have been interested in studying statistical estimation problems for Gaussian Ornstein-Uhlenbeck processes. Let us mention some works in this direction in the case of an Ornstein-Uhlenbeck process driven by a fractional Brownian motion $B^{0,b}$, that is, the solution of (1.2) with $a=0$. In the maximum likelihood approach (see [9]), the techniques used to construct maximum likelihood estimators for the drift parameter are based on Girsanov transforms for fractional Brownian motion and depend on the properties of the deterministic fractional operators (determined by the Hurst parameter) related to the fBm. In general, the MLE is not easily computable. On the other hand, using the least squares method in the ergodic case corresponding to $\theta<0$, the statistical estimation of the parameter $\theta$ has been studied in several papers, for instance [10,11,12,13,14] and the references therein. Further, in the non-ergodic case corresponding to $\theta>0$, the estimation of $\theta$ has been considered by using the least squares method, for example in [15,16,17,18] and the references therein.

    Here our aim is to estimate the drift parameter $\theta$, based on continuous-time and discrete-time observations of $X$, by using least squares-type estimators (LSEs) for $\theta$.

    First we will consider the following LSE

    $$\tilde{\theta}_t=\frac{X_t^2}{2\int_0^t X_s^2\,ds},\qquad t\ge 0, \tag{1.3}$$

    as a statistic to estimate $\theta$ based on the continuous-time observations $\{X_s,\ s\in[0,t]\}$ of (1.2), as $t\to\infty$. We will prove the strong consistency and the asymptotic behavior in distribution of the estimator $\tilde{\theta}_t$ for all parameters $a>-1$, $|b|<1$ and $|b|<a+1$. Our results extend those proved in [1,2], where only $-\frac{1}{2}<a<0$, $a<b<a+1$ were considered.
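
    On a discretized record the statistic (1.3) can be approximated by replacing the time integral with a Riemann sum. A minimal sketch, assuming a path X observed on a uniform grid of mesh dt (for instance the one produced by the simulation sketch above; the function name is ours):

```python
import numpy as np

def lse_continuous(X, dt):
    """Approximation of tilde(theta)_t = X_t^2 / (2 * int_0^t X_s^2 ds), eq. (1.3),
    with a left-point Riemann sum for the integral."""
    return X[-1] ** 2 / (2.0 * dt * np.sum(X[:-1] ** 2))

# example: print(lse_continuous(X, 0.02))
```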

    Further, from a practical point of view, in parametric inference it is more realistic and interesting to consider asymptotic estimation for (1.2) based on discrete observations. So, we will assume that the process $X$ given in (1.2) is observed equidistantly in time with step size $\Delta_n$: $t_i=i\Delta_n$, $i=0,\ldots,n$, and $T_n=n\Delta_n$ denotes the length of the "observation window". Then we will consider the following estimators

    $$\hat{\theta}_n=\frac{\sum_{i=1}^{n}X_{t_{i-1}}\big(X_{t_i}-X_{t_{i-1}}\big)}{\Delta_n\sum_{i=1}^{n}X_{t_{i-1}}^2} \tag{1.4}$$

    and

    $$\check{\theta}_n=\frac{X_{T_n}^2}{2\Delta_n\sum_{i=1}^{n}X_{t_{i-1}}^2} \tag{1.5}$$

    as statistics to estimate $\theta$ based on the sampling data $X_{t_i}$, $i=0,\ldots,n$, as $\Delta_n\to 0$ and $n\to\infty$. We will study the asymptotic behavior and the rate consistency of the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ for all parameters $a>-1$, $|b|<1$ and $|b|<a+1$. In this case, our results extend those proved in [3], where only $-1<a<0$, $a<b<a+1$ were considered.
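
    Both discrete-observation estimators are plain ratios of sums and can be computed in a few lines. A sketch (X is the vector of observations X_{t_0}, ..., X_{t_n} and dt stands for Δ_n; the function names are ours). Note that (1.5) is exactly the Riemann-sum discretization of (1.3):

```python
import numpy as np

def lse_hat(X, dt):
    """hat(theta)_n of eq. (1.4): sum X_{t_{i-1}} (X_{t_i} - X_{t_{i-1}}) / (Delta_n * sum X_{t_{i-1}}^2)."""
    return np.sum(X[:-1] * np.diff(X)) / (dt * np.sum(X[:-1] ** 2))

def lse_check(X, dt):
    """check(theta)_n of eq. (1.5): X_{T_n}^2 / (2 * Delta_n * sum X_{t_{i-1}}^2)."""
    return X[-1] ** 2 / (2.0 * dt * np.sum(X[:-1] ** 2))
```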

    The rest of the paper is organized as follows. In Section 2, we present auxiliary results that are used in the calculations of the paper. In Section 3, we prove the consistency and the asymptotic distribution of the estimator $\tilde{\theta}_t$ given in (1.3), based on the continuous-time observations of $X$. In Section 4, we study the asymptotic behavior and the rate consistency of the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ defined in (1.4) and (1.5), respectively, based on the discrete-time observations of $X$. Our theoretical study is completed with a simulation study. We end the paper with an Appendix containing a short review of some results from [15,17] needed for the proofs of our results.

    This section is devoted to proving some technical ingredients which will be needed throughout this paper.

    In the following lemma we provide a useful decomposition of the covariance function $R^{a,b}(t,s)$ of $B^{a,b}$.

    Lemma 2.1. Suppose that $a>-1$, $|b|<1$ and $|b|<a+1$. Then we can rewrite the covariance $R^{a,b}(t,s)$ of $B^{a,b}$, given in (1.1), as

    $$R^{a,b}(t,s)=\beta(a+1,b+1)\big[t^{a+b+1}+s^{a+b+1}\big]-m(t,s), \tag{2.1}$$

    where $\beta(c,d)=\int_0^1 x^{c-1}(1-x)^{d-1}\,dx$ denotes the usual beta function, and the function $m(t,s)$ is defined by

    $$m(t,s):=\int_{s\wedge t}^{s\vee t}u^a\big(s\vee t-u\big)^b\,du. \tag{2.2}$$

    Proof. We have, for every $s,t\ge 0$,

    $$\begin{aligned}
    R^{a,b}(t,s)&=E\big(B^{a,b}_tB^{a,b}_s\big)=\int_0^{s\wedge t}u^a\big[(t-u)^b+(s-u)^b\big]\,du\\
    &=\int_0^{s\wedge t}u^a\big[(s\vee t-u)^b+(s\wedge t-u)^b\big]\,du \qquad\qquad\qquad\qquad\qquad (2.3)\\
    &=\int_0^{s\wedge t}u^a(s\vee t-u)^b\,du+\int_0^{s\wedge t}u^a(s\wedge t-u)^b\,du\\
    &=\int_0^{s\vee t}u^a(s\vee t-u)^b\,du-\int_{s\wedge t}^{s\vee t}u^a(s\vee t-u)^b\,du+\int_0^{s\wedge t}u^a(s\wedge t-u)^b\,du. \qquad (2.4)
    \end{aligned}$$

    Further, making the change of variables $x=u/t$, we have, for every $t\ge 0$,

    $$\int_0^t u^a(t-u)^b\,du=t^b\int_0^t u^a\Big(1-\frac{u}{t}\Big)^b\,du=t^{a+b+1}\int_0^1 x^a(1-x)^b\,dx=t^{a+b+1}\beta(a+1,b+1). \tag{2.5}$$

    Therefore, combining (2.4) and (2.5), we deduce that

    $$\begin{aligned}
    R^{a,b}(t,s)&=\beta(a+1,b+1)\big[(s\vee t)^{a+b+1}+(s\wedge t)^{a+b+1}\big]-\int_{s\wedge t}^{s\vee t}u^a(s\vee t-u)^b\,du\\
    &=\beta(a+1,b+1)\big[t^{a+b+1}+s^{a+b+1}\big]-\int_{s\wedge t}^{s\vee t}u^a(s\vee t-u)^b\,du,
    \end{aligned}\tag{2.6}$$

    which proves (2.1).
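
    As a quick numerical sanity check of the decomposition (2.1), one can compare both sides for arbitrary admissible parameters (a sketch; SciPy's quad and beta are assumed available, and the parameter values are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

a, b, s, t = 0.5, 0.3, 1.2, 2.0

lhs, _ = quad(lambda u: u**a * ((t - u)**b + (s - u)**b), 0.0, min(s, t))  # R^{a,b}(t,s), eq. (1.1)
m, _ = quad(lambda u: u**a * (max(s, t) - u)**b, min(s, t), max(s, t))     # m(t,s), eq. (2.2)
rhs = beta(a + 1, b + 1) * (t**(a + b + 1) + s**(a + b + 1)) - m           # right-hand side of (2.1)
print(lhs, rhs)  # the two values should agree up to quadrature error
```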

    We will also need the following technical lemma.

    Lemma 2.2. We have, as $t\to\infty$,

    $$I_t:=t^{-a}e^{-\theta t}\int_0^t e^{\theta s}m(t,s)\,ds\longrightarrow\frac{\Gamma(b+1)}{\theta^{b+2}}, \tag{2.7}$$
    $$J_t:=t^{-a}e^{-2\theta t}\int_0^t\int_0^t e^{\theta s}e^{\theta r}m(s,r)\,dr\,ds\longrightarrow\frac{\Gamma(b+1)}{\theta^{b+3}}, \tag{2.8}$$

    where $\Gamma(\cdot)$ is the standard gamma function, whereas the function $m(t,s)$ is defined in (2.2).

    Proof. We first prove (2.7). We have,

    $$\begin{aligned}
    t^{-a}e^{-\theta t}\int_0^t e^{\theta s}m(t,s)\,ds
    &=t^{-a}e^{-\theta t}\int_0^t e^{\theta s}\int_s^t u^a(t-u)^b\,du\,ds
    =t^{-a}e^{-\theta t}\int_0^t du\,u^a(t-u)^b\int_0^u ds\,e^{\theta s}\\
    &=t^{-a}e^{-\theta t}\int_0^t du\,u^a(t-u)^b\,\frac{e^{\theta u}-1}{\theta}
    =\frac{t^{-a}e^{-\theta t}}{\theta}\int_0^t u^a(t-u)^b e^{\theta u}\,du-\frac{t^{-a}e^{-\theta t}}{\theta}\int_0^t u^a(t-u)^b\,du.
    \end{aligned}$$

    On the other hand, by the change of variables $x=t-u$, we get

    $$\begin{aligned}
    \frac{t^{-a}e^{-\theta t}}{\theta}\int_0^t u^a(t-u)^b e^{\theta u}\,du
    &=\frac{t^{-a}}{\theta}\int_0^t (t-x)^a x^b e^{-\theta x}\,dx
    =\frac{1}{\theta}\int_0^t \Big(1-\frac{x}{t}\Big)^a x^b e^{-\theta x}\,dx\\
    &\longrightarrow\frac{1}{\theta}\int_0^{\infty}x^b e^{-\theta x}\,dx=\frac{\Gamma(b+1)}{\theta^{b+2}}
    \end{aligned}$$

    as $t\to\infty$. Moreover, by the change of variables $x=u/t$,

    $$\begin{aligned}
    \frac{t^{-a}e^{-\theta t}}{\theta}\int_0^t u^a(t-u)^b\,du
    &=\frac{e^{-\theta t}}{\theta}\,t^{b}\int_0^t\Big(\frac{u}{t}\Big)^a\Big(1-\frac{u}{t}\Big)^b\,du
    =\frac{e^{-\theta t}}{\theta}\,t^{b+1}\int_0^1 x^a(1-x)^b\,dx\\
    &=\frac{e^{-\theta t}}{\theta}\,t^{b+1}\beta(a+1,b+1)\longrightarrow 0
    \end{aligned}$$

    as $t\to\infty$. Thus the proof of the convergence (2.7) is done.

    For (2.8), using L'Hôpital's rule, we obtain

    $$\begin{aligned}
    \lim_{t\to\infty}t^{-a}e^{-2\theta t}\int_0^t\!\!\int_0^t e^{\theta s}e^{\theta r}m(s,r)\,dr\,ds
    &=\lim_{t\to\infty}\frac{2\int_0^t\int_0^s e^{\theta s}e^{\theta r}m(s,r)\,dr\,ds}{t^{a}e^{2\theta t}}
    =\lim_{t\to\infty}\frac{2e^{\theta t}\int_0^t e^{\theta r}m(t,r)\,dr}{t^{a}e^{2\theta t}\big(2\theta+\frac{a}{t}\big)}\\
    &=\lim_{t\to\infty}\frac{2}{2\theta+\frac{a}{t}}\,t^{-a}e^{-\theta t}\int_0^t e^{\theta r}m(t,r)\,dr
    =\frac{\Gamma(b+1)}{\theta^{b+3}},
    \end{aligned}$$

    where the latter equality comes from (2.7). Therefore the convergence (2.8) is proved.
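
    The convergence (2.7) can also be observed numerically. In the sketch below the factor $e^{-\theta t}e^{\theta s}$ is rewritten as $e^{-\theta(t-s)}$ so that the computation stays stable for moderately large $t$; the parameter values are arbitrary choices of ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, b, theta = 0.5, 0.3, 1.0

def m(t, s):
    """m(t,s) of eq. (2.2)."""
    lo, hi = min(s, t), max(s, t)
    val, _ = quad(lambda u: u**a * (hi - u)**b, lo, hi)
    return val

def I(t):
    """I_t of eq. (2.7), computed as t^{-a} * int_0^t exp(-theta (t - s)) m(t, s) ds."""
    val, _ = quad(lambda s: np.exp(-theta * (t - s)) * m(t, s), 0.0, t, limit=200)
    return t ** (-a) * val

print(I(50.0), gamma(b + 1) / theta ** (b + 2))  # I_t approaches Gamma(b+1)/theta^(b+2) as t grows
```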

    In this section we will establish the consistency and the asymptotic distribution of the least squares-type estimator $\tilde{\theta}_t$ given in (1.3), based on the continuous-time observation $\{X_s,\ s\in[0,t]\}$ given by (1.2), as $t\to\infty$.

    Recall that if $X\sim\mathcal{N}(m_1,\sigma_1^2)$ and $Y\sim\mathcal{N}(m_2,\sigma_2^2)$ are two independent random variables, then $X/Y$ follows a Cauchy-type distribution. For a motivation and further references, we refer the reader to [19], as well as [20]. Notice also that if $N\sim\mathcal{N}(0,1)$ is independent of $B^{a,b}$, then $N$ is independent of $Z$, since $Z:=\int_0^{\infty}e^{-\theta s}B^{a,b}_s\,ds$ is a functional of $B^{a,b}$.
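
    For the centered case used below, this classical fact can be made explicit (recalled here for convenience; it is not stated in this form in the paper): if $X\sim\mathcal{N}(0,\sigma_1^2)$ and $Y\sim\mathcal{N}(0,\sigma_2^2)$ are independent, then

    $$\frac{X}{Y}\overset{law}{=}\frac{\sigma_1}{\sigma_2}\,\mathcal{C}(1),\qquad\text{with density }z\mapsto\frac{1}{\pi}\,\frac{\sigma_1/\sigma_2}{(\sigma_1/\sigma_2)^2+z^2}.$$

    Applied to $N\sim\mathcal{N}(0,1)$ and the centered Gaussian variable $Z$, the ratio $N/Z$ is Cauchy with scale $1/\sqrt{E(Z^2)}$, which is consistent with the factor $2\sigma_{B^{a,b}}/\sqrt{E(Z^2)}$ appearing in Theorem 3.1 below.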

    Theorem 3.1. Assume that $a>-1$, $|b|<1$, $|b|<a+1$, and let $\tilde{\theta}_t$ be the estimator given in (1.3). Then, as $t\to\infty$,

    $$\tilde{\theta}_t\longrightarrow\theta\quad\text{almost surely}.$$

    Moreover, as $t\to\infty$,

    $$t^{-a/2}e^{\theta t}\big(\tilde{\theta}_t-\theta\big)\xrightarrow{\;law\;}\frac{2\sigma_{B^{a,b}}}{\sqrt{E(Z^2)}}\,\mathcal{C}(1),$$

    where $\sigma_{B^{a,b}}=\sqrt{\frac{\Gamma(b+1)}{\theta^{b+1}}}$, $Z:=\int_0^{\infty}e^{-\theta s}B^{a,b}_s\,ds$, whereas $\mathcal{C}(1)$ is the standard Cauchy distribution with the probability density function $\frac{1}{\pi(1+x^2)}$, $x\in\mathbb{R}$.

    Proof. In order to prove Theorem 3.1, using Theorem 6.1, it suffices to check that the assumptions (H1), (H2), (H3) and (H4) hold.

    Using (2.1) and the change of variables $x=(t-u)/(t-s)$, we get, for every $0<s\le t$,

    $$\begin{aligned}
    E\big(B^{a,b}_t-B^{a,b}_s\big)^2&=2\int_s^t u^a(t-u)^b\,du
    =2(t-s)^{b+1}\int_0^1\big[t(1-x)+sx\big]^a x^b\,dx\\
    &=2t^a(t-s)^{b+1}\int_0^1\Big[(1-x)+\frac{s}{t}x\Big]^a x^b\,dx=:I_{a,b}.
    \end{aligned}$$

    Further, since $(1-x)+\frac{s}{t}x\le 1$ for every $x\in[0,1]$ (so that $\big[(1-x)+\frac{s}{t}x\big]^a\le 1$ when $a\ge 0$), and since $(1-x)+\frac{s}{t}x\ge 1-x$ (so that $\big[(1-x)+\frac{s}{t}x\big]^a\le(1-x)^a$ when $-1<a<0$), there exists a constant $C_{a,b}$ depending only on $a$ and $b$ such that

    $$I_{a,b}\le 2\,t^a(t-s)^{b+1}\int_0^1 x^b(1-x)^{a\wedge 0}\,dx=:2\,C_{a,b}\,t^a(t-s)^{b+1}.$$

    This implies

    $$E\big(B^{a,b}_t-B^{a,b}_s\big)^2\le 2\,C_{a,b}\,t^a(t-s)^{b+1}.$$

    Furthermore, if $-1<a<0$, we have $t^a(t-s)^{b+1}\le(t-s)^{a+b+1}=|t-s|^{(a+b+1)\wedge(b+1)}$, and if $a\ge 0$, we have $t^a(t-s)^{b+1}\le T^a(t-s)^{b+1}=T^a|t-s|^{(a+b+1)\wedge(b+1)}$.

    Consequently, for any fixed $T$, there exists a constant $C_{a,b}(T)$ depending only on $a,b,T$ such that, for every $0<s\le t\le T$,

    $$E\big(B^{a,b}_t-B^{a,b}_s\big)^2\le C_{a,b}(T)\,|t-s|^{(a+b+1)\wedge(b+1)}.$$

    Therefore, using the fact that $B^{a,b}$ is Gaussian, together with Kolmogorov's continuity criterion, we deduce that $B^{a,b}$ has a version with $\big(\frac{(a+b+1)\wedge(b+1)}{2}-\varepsilon\big)$-Hölder continuous paths for every $\varepsilon\in\big(0,\frac{(a+b+1)\wedge(b+1)}{2}\big)$. Thus (H1) holds for any $\delta$ in $\big(0,\frac{(a+b+1)\wedge(b+1)}{2}\big)$.

    On the other hand, according to (2.1) we have, for every $t\ge 0$,

    $$E\big(B^{a,b}_t\big)^2=2\beta(1+a,1+b)\,t^{a+b+1},$$

    which proves that (H2) holds for $\gamma=(a+b+1)/2$.

    Now it remains to check that the assumptions (H3) and (H4) hold for $\nu=-a/2$ and $\sigma_{B^{a,b}}=\sqrt{\frac{\Gamma(b+1)}{\theta^{b+1}}}$. Let us first compute the limiting variance of $t^{-a/2}e^{-\theta t}\int_0^t e^{\theta s}\,dB^{a,b}_s$ as $t\to\infty$. By (2.1) we obtain

    $$\begin{aligned}
    E\bigg[\Big(t^{-a/2}e^{-\theta t}\int_0^t e^{\theta s}\,dB^{a,b}_s\Big)^2\bigg]
    &=E\bigg[\Big(t^{-a/2}e^{-\theta t}\Big(e^{\theta t}B^{a,b}_t-\theta\int_0^t e^{\theta s}B^{a,b}_s\,ds\Big)\Big)^2\bigg]\\
    &=t^{-a}\bigg(R^{a,b}(t,t)-2\theta e^{-\theta t}\int_0^t e^{\theta s}R^{a,b}(t,s)\,ds+\theta^2 e^{-2\theta t}\int_0^t\!\!\int_0^t e^{\theta s}e^{\theta r}R^{a,b}(s,r)\,ds\,dr\bigg)\\
    &=t^{-a}\Delta_{g_{B^{a,b}}}(t)+2\theta I_t-\theta^2 J_t,
    \end{aligned}\tag{3.1}$$

    where $I_t$, $J_t$ and $\Delta_{g_{B^{a,b}}}(t)$ are defined in (2.7), (2.8) and Lemma 6.1, respectively, whereas $g_{B^{a,b}}(s,r)=\beta(a+1,b+1)\big(s^{a+b+1}+r^{a+b+1}\big)$.

    On the other hand, since $\frac{\partial g_{B^{a,b}}}{\partial s}(s,0)=\beta(a+1,b+1)(a+b+1)s^{a+b}$ and $\frac{\partial^2 g_{B^{a,b}}}{\partial s\,\partial r}(s,r)=0$, it follows from (6.2) that

    $$t^{-a}\Delta_{g_{B^{a,b}}}(t)=2\beta(a+1,b+1)(a+b+1)\,t^{-a}e^{-2\theta t}\int_0^t s^{a+b}e^{\theta s}\,ds\le 2\beta(a+1,b+1)\,e^{-\theta t}\,t^{b+1}\longrightarrow 0\quad\text{as } t\to\infty. \tag{3.2}$$

    Combining (3.1), (3.2), (2.7) and (2.8), we get

    $$E\bigg[\Big(t^{-a/2}e^{-\theta t}\int_0^t e^{\theta s}\,dB^{a,b}_s\Big)^2\bigg]\longrightarrow\frac{\Gamma(b+1)}{\theta^{b+1}}\quad\text{as } t\to\infty,$$

    which implies that (H3) holds.

    Hence, to finish the proof it remains to check that (H4) holds, that is, for all fixed $s\ge 0$,

    $$\lim_{t\to\infty}E\Big(B^{a,b}_s\,t^{-a/2}e^{-\theta t}\int_0^t e^{\theta r}\,dB^{a,b}_r\Big)=0.$$

    Let us consider s<t. According to (6.4), we can write

    $$\begin{aligned}
    E\Big(B^{a,b}_s\,t^{-a/2}e^{-\theta t}\int_0^t e^{\theta r}\,dB^{a,b}_r\Big)
    &=t^{-a/2}\Big(R^{a,b}(s,t)-\theta e^{-\theta t}\int_0^t e^{\theta r}R^{a,b}(s,r)\,dr\Big)\\
    &=t^{-a/2}\Big(R^{a,b}(s,t)-\theta e^{-\theta t}\int_s^t e^{\theta r}R^{a,b}(s,r)\,dr-\theta e^{-\theta t}\int_0^s e^{\theta r}R^{a,b}(s,r)\,dr\Big)\\
    &=t^{-a/2}\Big(e^{-\theta(t-s)}R^{a,b}(s,s)+e^{-\theta t}\int_s^t e^{\theta r}\frac{\partial R^{a,b}}{\partial r}(s,r)\,dr-\theta e^{-\theta t}\int_0^s e^{\theta r}R^{a,b}(s,r)\,dr\Big).
    \end{aligned}$$

    It is clear that $t^{-a/2}\Big(e^{-\theta(t-s)}R^{a,b}(s,s)-\theta e^{-\theta t}\int_0^s e^{\theta r}R^{a,b}(s,r)\,dr\Big)\to 0$ as $t\to\infty$. Let us now prove that

    $$t^{-a/2}e^{-\theta t}\int_s^t e^{\theta r}\frac{\partial R^{a,b}}{\partial r}(s,r)\,dr\longrightarrow 0$$

    as $t\to\infty$. Using (1.1) we have, for $s<r$,

    $$\frac{\partial R^{a,b}}{\partial r}(s,r)=b\int_0^s u^a(r-u)^{b-1}\,du.$$

    Applying L'Hôpital's rule we obtain

    $$\begin{aligned}
    \lim_{t\to\infty}t^{-a/2}e^{-\theta t}\int_s^t e^{\theta r}\frac{\partial R^{a,b}}{\partial r}(s,r)\,dr
    &=\lim_{t\to\infty}\frac{b\,t^{-a/2}}{\theta+\frac{a}{2t}}\int_0^s u^a(t-u)^{b-1}\,du\\
    &=\lim_{t\to\infty}\frac{b\,t^{b-1-\frac{a}{2}}}{\theta+\frac{a}{2t}}\int_0^s u^a\Big(1-\frac{u}{t}\Big)^{b-1}\,du=0,
    \end{aligned}$$

    due to $b-1-\frac{a}{2}<0$. In fact, if $-1<a<0$, we use $b<a+1$, so that $b<a+1<\frac{a}{2}+1$. Otherwise, if $a\ge 0$, we use $b<1$, so that $b-1-\frac{a}{2}\le b-1<0$. Therefore the proof of Theorem 3.1 is complete.

    In this section, our purpose is to study the asymptotic behavior and the rate consistency of the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ based on the sampling data $X_{t_i}$, $i=0,\ldots,n$, of (1.2), where $t_i=i\Delta_n$, $i=0,\ldots,n$, and $T_n=n\Delta_n$ denotes the length of the "observation window".

    Definition 4.1. Let $\{Z_n\}$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},P)$. We say $\{Z_n\}$ is tight (or bounded in probability), if for every $\varepsilon>0$, there exists $M_{\varepsilon}>0$ such that,

    $$P(|Z_n|>M_{\varepsilon})<\varepsilon\quad\text{for all } n.$$

    Theorem 4.1. Assume that $a>-1$, $|b|<1$, $|b|<a+1$. Let $\hat{\theta}_n$ and $\check{\theta}_n$ be the estimators given in (1.4) and (1.5), respectively. Suppose that $\Delta_n\to 0$ and $n\Delta_n^{1+\alpha}\to\infty$ for some $\alpha>0$. Then, as $n\to\infty$,

    $$\hat{\theta}_n\longrightarrow\theta,\qquad\check{\theta}_n\longrightarrow\theta\quad\text{almost surely},$$

    and for any $q\ge 0$,

    $$\Delta_n^{q}\,e^{\theta T_n}\big(\hat{\theta}_n-\theta\big)\quad\text{and}\quad\Delta_n^{q}\,e^{\theta T_n}\big(\check{\theta}_n-\theta\big)\quad\text{are not tight}.$$

    In addition, if we assume that $n\Delta_n^3\to 0$ as $n\to\infty$, the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ are $\sqrt{T_n}$-consistent in the sense that the sequences

    $$\sqrt{T_n}\big(\hat{\theta}_n-\theta\big)\quad\text{and}\quad\sqrt{T_n}\big(\check{\theta}_n-\theta\big)\quad\text{are tight}.$$

    Proof. In order to prove Theorem 4.1, using Theorem 6.2, it suffices to check that the assumptions (H1), (H2) and (H5) hold.

    From the proof of Theorem 3.1, the assumptions (H1), (H2) hold. Now it remains to check that (H5) holds. In this case, the process ζ is defined as

    $$\zeta_t:=\int_0^t e^{-\theta s}\,dB^{a,b}_s,\qquad t\ge 0,$$

    whereas the integral is interpreted in the Young sense (see Appendix).

    Using formulas (6.4) and (6.3), we can write

    $$\begin{aligned}
    E\big[(\zeta_{t_i}-\zeta_{t_{i-1}})^2\big]&=E\bigg[\Big(\int_{t_{i-1}}^{t_i}e^{-\theta s}\,dB^{a,b}_s\Big)^2\bigg]
    =E\bigg[\Big(e^{-\theta t_i}B^{a,b}_{t_i}-e^{-\theta t_{i-1}}B^{a,b}_{t_{i-1}}+\theta\int_{t_{i-1}}^{t_i}e^{-\theta s}B^{a,b}_s\,ds\Big)^2\bigg]\\
    &=\lambda_{g_{B^{a,b}}}(t_i,t_{i-1})-\lambda_{m}(t_i,t_{i-1})\\
    &=\int_{t_{i-1}}^{t_i}\!\!\int_{t_{i-1}}^{t_i}e^{-\theta(r+u)}\frac{\partial^2 g_{B^{a,b}}}{\partial r\,\partial u}(r,u)\,dr\,du-\lambda_{m}(t_i,t_{i-1})
    =-\lambda_{m}(t_i,t_{i-1}),
    \end{aligned}$$

    where $\lambda_{\cdot}(t_i,t_{i-1})$ is defined in Lemma 6.2, $g_{B^{a,b}}(s,r)=\beta(a+1,b+1)\big(s^{a+b+1}+r^{a+b+1}\big)$ and $\frac{\partial^2 g_{B^{a,b}}}{\partial s\,\partial r}(s,r)=0$, whereas the term $\lambda_{m}(t_i,t_{i-1})$ is equal to

    $$\begin{aligned}
    \lambda_{m}(t_i,t_{i-1})&=-2m(t_i,t_{i-1})e^{-\theta(t_{i-1}+t_i)}+2\theta e^{-\theta t_i}\int_{t_{i-1}}^{t_i}m(r,t_i)e^{-\theta r}\,dr\\
    &\quad-2\theta e^{-\theta t_{i-1}}\int_{t_{i-1}}^{t_i}m(r,t_{i-1})e^{-\theta r}\,dr+\theta^2\int_{t_{i-1}}^{t_i}\!\!\int_{t_{i-1}}^{t_i}m(r,u)e^{-\theta(r+u)}\,dr\,du.
    \end{aligned}$$

    Combining this with the fact that, for every $t_{i-1}\le u\le r\le t_i$, $i\ge 2$,

    $$|m(r,u)|=\Big|\int_u^r x^a(r-x)^b\,dx\Big|\le
    \begin{cases}
    u^a\displaystyle\int_u^r(r-x)^b\,dx & \text{if } -1<a<0,\\[2mm]
    r^a\displaystyle\int_u^r(r-x)^b\,dx & \text{if } a\ge 0,
    \end{cases}
    \le
    \begin{cases}
    \dfrac{\Delta_n^{a+b+1}}{b+1} & \text{if } -1<a\le 0,\\[2mm]
    \dfrac{(n\Delta_n)^{a}\Delta_n^{b+1}}{b+1} & \text{if } a>0,
    \end{cases}$$

    together with $\Delta_n\to 0$, the fact that $u\ge t_{i-1}\ge\Delta_n$ for $i\ge 2$ (so that $u^a\le\Delta_n^a$ when $a<0$), and the observation that every term of $\lambda_{m}(t_i,t_{i-1})$ carries an exponential factor bounded by $e^{-2\theta t_{i-1}}\le e^{2\theta\Delta_n}e^{-2\theta t_i}$, we deduce that there is a positive constant $C$ such that

    $$E\big[(\zeta_{t_i}-\zeta_{t_{i-1}})^2\big]\le C\,e^{-2\theta t_i}
    \begin{cases}
    \dfrac{\Delta_n^{a+b+1}}{b+1} & \text{if } -1<a\le 0,\\[2mm]
    \dfrac{(n\Delta_n)^{a}\Delta_n^{b+1}}{b+1} & \text{if } a>0,
    \end{cases}$$

    which proves that the assumption (H5) holds (with $\mu=0$, $\rho=a+b+1$ when $-1<a\le 0$, and $\mu=a$, $\rho=b+1$ when $a>0$). Therefore the desired result is obtained.

    For sample size $n=2500$, we simulate 100 sample paths of the process $X$ given by (1.2), using the software R. Tables 1–8 below report the mean, median and standard deviation values of the proposed estimators $\hat{\theta}_n$ and $\check{\theta}_n$, defined respectively by (1.4) and (1.5), for several true values of the parameter $\theta$. The results of the tables show that the drift estimators $\hat{\theta}_n$ and $\check{\theta}_n$ perform well for different arbitrary values of $a$ and $b$ and that they are strongly consistent, namely their values are close to the true values of the drift parameter $\theta$. (A small reproduction sketch, in Python, is given after the tables.)

    Table 1.  Mean, median and standard deviation values for $\check{\theta}_n$, with $a=0.5$ and $b=0.9$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 1.838178 2.039271 3.019508 7.027331 10.01776
    Median 1.744118 1.906163 3.07125 7.02379 10.02108
    Std. dev. 1.211776 1.007366 0.821718 0.1717033 0.0257764

    Table 2.  Mean, median and standard deviation values for $\check{\theta}_n$, with $a=0.1$ and $b=0.4$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 1.259471 1.481005 2.39942 7.01006 10.02068
    Median 1.170947 1.437582 2.552394 7.014911 10.01972
    Std. dev. 0.9584086 0.8501618 1.004956 0.08241352 0.01018074

    Table 3.  Mean, median and standard deviation values for $\hat{\theta}_n$, with $a=0.5$ and $b=0.9$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 1.829429 2.03153 3.014143 7.01739 9.997761
    Median 1.74294 1.896568 3.069177 7.013946 10.00107
    Std. dev. 1.200806 1.008749 0.8246725 0.1711237 0.02569278

    Table 4.  Mean, median and standard deviation values for $\hat{\theta}_n$, with $a=0.1$ and $b=0.4$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 1.098053 1.388654 2.326678 6.998938 10.00066
    Median 1.144919 1.402191 2.53695 7.005075 9.999712
    Std. dev. 0.9961072 0.8888663 1.077326 0.08785867 0.01012159

    Table 5.  Mean, median and standard deviation values for $\check{\theta}_n$, with $a=10$ and $b=0.7$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 6.284526 5.944562 7.676024 7.855939 9.586733
    Median 5.620266 5.235174 6.698286 8.074982 9.919933
    Std. dev. 4.851366 4.358735 6.545987 4.19334 2.664122

    Table 6.  Mean, median and standard deviation values for $\check{\theta}_n$, with $a=5$ and $b=0.9$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 4.363369 3.81052 4.457953 6.959573 9.817441
    Median 3.456968 3.650188 4.350376 7.172276 10.02188
    Std. dev. 3.492142 2.865577 2.699623 1.759205 1.191188

    Table 7.  Mean, median and standard deviation values for $\hat{\theta}_n$, with $a=10$ and $b=0.7$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 5.882283 5.609962 7.379757 7.729786 9.526597
    Median 5.337299 5.041629 6.502311 8.05469 9.900039
    Std. dev. 4.882747 4.38566 6.319027 4.253017 2.698502

    Table 8.  Mean, median and standard deviation values for $\hat{\theta}_n$, with $a=5$ and $b=0.9$.
    θ=0.5 θ=0.9 θ=2.5 θ=7 θ=10
    Mean 4.328267 3.782118 4.440072 6.946132 9.797283
    Median 3.451499 3.590002 4.344927 7.161959 10.00186
    Std. dev. 3.46757 2.86794 2.688184 1.758228 1.190694

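
    For completeness, here is a sketch of how one such Monte Carlo experiment could be reproduced (in Python, whereas the simulations reported above were carried out in R). It reuses the helpers wfbm_cholesky, simulate_ou, lse_hat and lse_check from the earlier sketches; the parameter values are illustrative and deliberately smaller than in the tables, because the Cholesky-based sampler becomes expensive for large n:

```python
import numpy as np

# assumes wfbm_cholesky, simulate_ou, lse_hat and lse_check (earlier sketches) are in scope
theta, a, b, n, dt = 2.5, 0.5, 0.3, 300, 0.02
L = wfbm_cholesky(a, b, n, dt)        # the costly step: done once and reused for every replication
rng = np.random.default_rng(1)

paths = (simulate_ou(theta, L, dt, rng) for _ in range(50))
estimates = np.array([(lse_hat(X, dt), lse_check(X, dt)) for X in paths])

for name, col in zip(["hat(theta)_n", "check(theta)_n"], estimates.T):
    print(name, "mean = %.3f, median = %.3f, std. dev. = %.3f"
          % (col.mean(), np.median(col), col.std()))
```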

    To conclude, in this paper we provide least squares-type estimators for the drift parameter $\theta$ of the weighted fractional Ornstein-Uhlenbeck process $X$, given by (1.2), based on continuous-time and discrete-time observations of $X$. The novelty of our approach is that it allows, compared with the literature on statistical inference for $X$ discussed in [1,2,3], to consider the general case $a>-1$, $|b|<1$ and $|b|<a+1$. More precisely,

    ● We estimate the drift parameter $\theta$ of (1.2) based on the continuous-time observations $\{X_s,\ s\in[0,t]\}$, as $t\to\infty$. We prove the strong consistency and the asymptotic behavior in distribution of the estimator $\tilde{\theta}_t$ for all parameters $a>-1$, $|b|<1$ and $|b|<a+1$. Our results extend those proved in [1,2], where only $-\frac{1}{2}<a<0$, $a<b<a+1$ were considered.

    ● Suppose that the process $X$ given in (1.2) is observed equidistantly in time with step size $\Delta_n$: $t_i=i\Delta_n$, $i=0,\ldots,n$. We estimate the drift parameter $\theta$ of (1.2) based on the sampling data $X_{t_i}$, $i=0,\ldots,n$, as $\Delta_n\to 0$ and $n\to\infty$. We study the asymptotic behavior and the rate consistency of the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ for all parameters $a>-1$, $|b|<1$ and $|b|<a+1$. In this case, our results extend those proved in [3], where only $-1<a<0$, $a<b<a+1$ were considered.

    The proofs of the asymptotic behavior of the estimators are based on a new decomposition of the covariance function $R^{a,b}(t,s)$ of the wfBm $B^{a,b}$ (see Lemma 2.1), and on slight extensions of results of [15] and [17] (see Theorems 6.1 and 6.2 in the Appendix).

    Here we present some ingredients needed in the paper.

    Let $G=(G_t,\ t\ge 0)$ be a continuous centered Gaussian process defined on some probability space $(\Omega,\mathcal{F},P)$ (here, and throughout the text, we assume that $\mathcal{F}$ is the sigma-field generated by $G$). In this section we consider the non-ergodic case of Gaussian Ornstein-Uhlenbeck processes $X=\{X_t,\ t\ge 0\}$ given by the following linear stochastic differential equation

    $$X_0=0;\qquad dX_t=\theta X_t\,dt+dG_t,\quad t\ge 0, \tag{6.1}$$

    where $\theta>0$ is an unknown parameter. It is clear that the linear equation (6.1) has the following explicit solution

    $$X_t=e^{\theta t}\zeta_t,\qquad t\ge 0,$$

    where

    $$\zeta_t:=\int_0^t e^{-\theta s}\,dG_s,\qquad t\ge 0,$$

    whereas this latter integral is interpreted in the Young sense.

    Let us introduce the following required assumptions.

    (H1) The process $G$ has Hölder continuous paths of some order $\delta\in(0,1]$.

    (H2) For every $t\ge 0$, $E(G_t^2)\le c\,t^{2\gamma}$ for some positive constants $c$ and $\gamma$.

    (H3) There is a constant $\nu\in\mathbb{R}$ such that the limiting variance of $t^{\nu}e^{-\theta t}\int_0^t e^{\theta s}\,dG_s$ exists as $t\to\infty$, that is, there exists a constant $\sigma_G>0$ such that

    $$\lim_{t\to\infty}E\bigg[\Big(t^{\nu}e^{-\theta t}\int_0^t e^{\theta s}\,dG_s\Big)^2\bigg]=\sigma_G^2.$$

    (H4) For $\nu$ given in (H3), we have, for all fixed $s\ge 0$,

    $$\lim_{t\to\infty}E\Big(G_s\,t^{\nu}e^{-\theta t}\int_0^t e^{\theta r}\,dG_r\Big)=0.$$

    (H5) There exist positive constants $\rho$, $C$ and a real constant $\mu$ such that

    $$E\big[(\zeta_{t_i}-\zeta_{t_{i-1}})^2\big]\le C\,(n\Delta_n)^{\mu}\,\Delta_n^{\rho}\,e^{-2\theta t_i}\quad\text{for every } i=1,\ldots,n,\ n\ge 1.$$

    The following theorem is a slight extension of the main result in [15], and it can be established following the same arguments as in [15].

    Theorem 6.1. Assume that (H1) and (H2) hold and let $\tilde{\theta}_t$ be the estimator of the form (1.3). Then, as $t\to\infty$,

    $$\tilde{\theta}_t\longrightarrow\theta\quad\text{almost surely}.$$

    Moreover, if (H1)–(H4) hold, then, as $t\to\infty$,

    $$t^{\nu}e^{\theta t}\big(\tilde{\theta}_t-\theta\big)\xrightarrow{\;law\;}\frac{2\sigma_G}{\sqrt{E(Z^2)}}\,\mathcal{C}(1),$$

    where $Z:=\int_0^{\infty}e^{-\theta s}G_s\,ds$, whereas $\mathcal{C}(1)$ is the standard Cauchy distribution with the probability density function $\frac{1}{\pi(1+x^2)}$, $x\in\mathbb{R}$.

    The following theorem is also a slight extension of the main result in [17], and it can be proved following line by line the proofs given in [17].

    Theorem 6.2. Assume that (H1), (H2) and (H5) hold. Let $\hat{\theta}_n$ and $\check{\theta}_n$ be the estimators of the forms (1.4) and (1.5), respectively. Suppose that $\Delta_n\to 0$ and $n\Delta_n^{1+\alpha}\to\infty$ for some $\alpha>0$. Then, as $n\to\infty$,

    $$\hat{\theta}_n\longrightarrow\theta,\qquad\check{\theta}_n\longrightarrow\theta\quad\text{almost surely},$$

    and for any $q\ge 0$,

    $$\Delta_n^{q}\,e^{\theta T_n}\big(\hat{\theta}_n-\theta\big)\quad\text{and}\quad\Delta_n^{q}\,e^{\theta T_n}\big(\check{\theta}_n-\theta\big)\quad\text{are not tight}.$$

    In addition, if we assume that $n\Delta_n^3\to 0$ as $n\to\infty$, the estimators $\hat{\theta}_n$ and $\check{\theta}_n$ are $\sqrt{T_n}$-consistent in the sense that the sequences

    $$\sqrt{T_n}\big(\hat{\theta}_n-\theta\big)\quad\text{and}\quad\sqrt{T_n}\big(\check{\theta}_n-\theta\big)\quad\text{are tight}.$$

    Lemma 6.1 ([15]). Let $g:[0,\infty)\times[0,\infty)\to\mathbb{R}$ be a symmetric function such that $\frac{\partial g}{\partial s}(s,r)$ and $\frac{\partial^2 g}{\partial s\,\partial r}(s,r)$ are integrable on $(0,\infty)\times[0,\infty)$. Then, for every $t\ge 0$,

    $$\begin{aligned}
    \Delta_g(t)&:=g(t,t)-2\theta e^{-\theta t}\int_0^t g(s,t)e^{\theta s}\,ds+\theta^2 e^{-2\theta t}\int_0^t\!\!\int_0^t g(s,r)e^{\theta(s+r)}\,dr\,ds\\
    &=2e^{-2\theta t}\int_0^t e^{\theta s}\frac{\partial g}{\partial s}(s,0)\,ds+2e^{-2\theta t}\int_0^t ds\,e^{\theta s}\int_0^s dr\,\frac{\partial^2 g}{\partial s\,\partial r}(s,r)\,e^{\theta r}.
    \end{aligned}\tag{6.2}$$

    Lemma 6.2 ([17]). Let $g:[0,\infty)\times[0,\infty)\to\mathbb{R}$ be a symmetric function such that $\frac{\partial g}{\partial s}(s,r)$ and $\frac{\partial^2 g}{\partial s\,\partial r}(s,r)$ are integrable on $(0,\infty)\times[0,\infty)$. Then, for every $t\ge s\ge 0$,

    $$\begin{aligned}
    \lambda_g(t,s)&:=g(t,t)e^{-2\theta t}+g(s,s)e^{-2\theta s}-2g(s,t)e^{-\theta(s+t)}+2\theta e^{-\theta t}\int_s^t g(r,t)e^{-\theta r}\,dr\\
    &\quad-2\theta e^{-\theta s}\int_s^t g(r,s)e^{-\theta r}\,dr+\theta^2\int_s^t\!\!\int_s^t g(r,u)e^{-\theta(r+u)}\,dr\,du\\
    &=\int_s^t\!\!\int_s^t e^{-\theta(r+u)}\frac{\partial^2 g}{\partial r\,\partial u}(r,u)\,dr\,du.
    \end{aligned}\tag{6.3}$$

    Let us now recall the Young integral introduced in [21]. For any $\alpha\in(0,1]$, we denote by $\mathcal{H}^{\alpha}([0,T])$ the set of $\alpha$-Hölder continuous functions, that is, the set of functions $f:[0,T]\to\mathbb{R}$ such that

    $$|f|_{\alpha}:=\sup_{0\le s<t\le T}\frac{|f(t)-f(s)|}{(t-s)^{\alpha}}<\infty.$$

    We also set $|f|_{\infty}=\sup_{t\in[0,T]}|f(t)|$, and we equip $\mathcal{H}^{\alpha}([0,T])$ with the norm $\|f\|_{\alpha}:=|f|_{\alpha}+|f|_{\infty}$.

    Let $f\in\mathcal{H}^{\alpha}([0,T])$, and consider the operator $T_f:C^1([0,T])\to C^0([0,T])$ defined as

    $$T_f(g)(t)=\int_0^t f(u)g'(u)\,du,\qquad t\in[0,T].$$

    It can be shown (see, e.g., [22, Section 3.1]) that, for any $\beta\in(1-\alpha,1)$, there exists a constant $C_{\alpha,\beta,T}>0$ depending only on $\alpha$, $\beta$ and $T$ such that, for any $g\in\mathcal{H}^{\beta}([0,T])$,

    $$\Big\|\int_0^{\cdot}f(u)g'(u)\,du\Big\|_{\beta}\le C_{\alpha,\beta,T}\,\|f\|_{\alpha}\,\|g\|_{\beta}.$$

    We deduce that, for any $\alpha\in(0,1)$, any $f\in\mathcal{H}^{\alpha}([0,T])$ and any $\beta\in(1-\alpha,1)$, the linear operator $T_f:C^1([0,T])\subset\mathcal{H}^{\beta}([0,T])\to\mathcal{H}^{\beta}([0,T])$, defined as $T_f(g)=\int_0^{\cdot}f(u)g'(u)\,du$, is continuous with respect to the norm $\|\cdot\|_{\beta}$. By density, it extends (in a unique way) to an operator defined on $\mathcal{H}^{\beta}$. As a consequence, if $f\in\mathcal{H}^{\alpha}([0,T])$, if $g\in\mathcal{H}^{\beta}([0,T])$ and if $\alpha+\beta>1$, then the (so-called) Young integral $\int_0^{\cdot}f(u)\,dg(u)$ is well-defined as being $T_f(g)$ (see [21]).

    The Young integral obeys the following formula. Let $f\in\mathcal{H}^{\alpha}([0,T])$ with $\alpha\in(0,1)$ and $g\in\mathcal{H}^{\beta}([0,T])$ with $\beta\in(0,1)$ such that $\alpha+\beta>1$. Then $\int_0^{\cdot}g_u\,df_u$ and $\int_0^{\cdot}f_u\,dg_u$ are well-defined as Young integrals. Moreover, for all $t\in[0,T]$,

    $$f_tg_t=f_0g_0+\int_0^t g_u\,df_u+\int_0^t f_u\,dg_u. \tag{6.4}$$
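
    Formula (6.4) is easy to sanity-check numerically: for smooth integrands the Young integral reduces to a Riemann-Stieltjes integral, which the left-point sums below approximate (the test functions and the grid are arbitrary choices of ours):

```python
import numpy as np

t_grid = np.linspace(0.0, 1.0, 200_001)
f = np.sin(3.0 * t_grid)                 # a smooth f (hence Holder continuous)
g = np.exp(-t_grid) + t_grid ** 2        # a smooth g

young = lambda h, k: np.sum(h[:-1] * np.diff(k))   # left-point approximation of int_0^1 h dk

lhs = f[-1] * g[-1]
rhs = f[0] * g[0] + young(g, f) + young(f, g)
print(lhs, rhs)  # both sides of (6.4) should agree up to discretization error
```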

    All authors declare that there is no conflict of interest in this paper.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)