Research article

Complete convergence of moving average processes produced by negatively dependent random variables under sub-linear expectations

  • Received: 28 March 2023 Revised: 08 May 2023 Accepted: 08 May 2023 Published: 17 May 2023
  • MSC : 60F15, 60F05

• Suppose that $\{a_i,-\infty<i<\infty\}$ is an absolutely summable sequence of real numbers and $\{Y_i,-\infty<i<\infty\}$ is a sequence of identically distributed, negatively dependent random variables under sub-linear expectations. We obtain complete convergence and the Marcinkiewicz-Zygmund strong law of large numbers for the partial sums of the moving average processes $\{X_n=\sum_{i=-\infty}^{\infty}a_iY_{i+n},\ n\ge1\}$ produced by $\{Y_i,-\infty<i<\infty\}$, complementing the relevant results in probability space.

    Citation: Mingzhou Xu. Complete convergence of moving average processes produced by negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2023, 8(7): 17067-17080. doi: 10.3934/math.2023871




Peng [1,2] introduced the basic concepts of sub-linear expectation spaces to describe uncertainty in probability. Stimulated by the works of Peng [1,2], many scholars have sought to establish, under sub-linear expectations, results similar to those in classical probability space. Zhang [3,4] obtained exponential inequalities and Rosenthal's inequality under sub-linear expectations. Xu et al. [5] and Xu and Kong [6] investigated complete convergence and complete moment convergence of weighted sums of negatively dependent random variables under sub-linear expectations. For more limit theorems under sub-linear expectations, the reader may refer to Zhang [7], Xu and Zhang [8,9], Wu and Jiang [10], Zhang and Lin [11], Zhong and Wu [12], Gao and Xu [13], Kuczmaszewska [14], Xu and Cheng [15,16], Zhang [17], Chen [18], Zhang [19], Chen and Wu [20], Xu et al. [5], Xu and Kong [6], and the references therein.

In classical probability space, Chen et al. [21] obtained the limiting behavior of moving average processes under a φ-mixing assumption. For references on complete moment convergence and complete convergence in probability space, the reader may refer to Hsu and Robbins [22], Chow [23], Hosseini and Nezakati [24], Meng et al. [25] and the references therein. Inspired by the works of Chen et al. [21], Xu et al. [5], and Xu and Kong [6], we discuss complete convergence for the partial sums of moving average processes generated by negatively dependent random variables under sub-linear expectations, together with the corresponding Marcinkiewicz-Zygmund strong law of large numbers, which complements the corresponding results in Chen et al. [21]. We also establish Conjecture 3.1 of Xu and Kong [6] in some sense.

We organize the remainder of this article as follows. In Section 2 we give the relevant basic notions, concepts and properties, and cite relevant lemmas under sub-linear expectations. In Section 3 we present our main results, Theorems 3.1–3.4, whose proofs are postponed to Section 4.

Hereafter, we use notions similar to those in the works of Peng [2] and Zhang [4]. Assume that $(\Omega,\mathcal{F})$ is a given measurable space. Suppose that $\mathcal{H}$ is a set of random variables on $(\Omega,\mathcal{F})$ such that $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ for $X_1,\ldots,X_n\in\mathcal{H}$ and each $\varphi\in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the set of functions $\varphi$ fulfilling

$$|\varphi(x)-\varphi(y)|\le C\left(1+|x|^{m}+|y|^{m}\right)|x-y|,\quad x,y\in\mathbb{R}^{n},$$

for some $C>0$ and $m\in\mathbb{N}$ depending on $\varphi$.

Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a functional $\hat{\mathbb{E}}:\mathcal{H}\to\bar{\mathbb{R}}:=[-\infty,\infty]$ fulfilling the following properties: for all $X,Y\in\mathcal{H}$,

(a) $X\ge Y$ implies $\hat{\mathbb{E}}[X]\ge\hat{\mathbb{E}}[Y]$;

(b) $\hat{\mathbb{E}}[c]=c$, $\forall c\in\mathbb{R}$;

(c) $\hat{\mathbb{E}}[\lambda X]=\lambda\hat{\mathbb{E}}[X]$, $\forall\lambda\ge0$;

(d) $\hat{\mathbb{E}}[X+Y]\le\hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]$ whenever $\hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]$ is not of the form $+\infty-\infty$ or $-\infty+\infty$.
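A standard way to generate a sub-linear expectation, going back to Peng [2], is to take the supremum of ordinary linear expectations over a family of probability measures. The following is a minimal numerical sketch of this representation on a finite sample space; the probability vectors `P`, the helper `sublinear_E` and the test values are illustrative assumptions, not objects from the paper.

```python
import numpy as np

# Two candidate probability vectors on a three-point sample space (the "uncertainty set").
P = np.array([
    [0.5, 0.3, 0.2],   # P1
    [0.2, 0.3, 0.5],   # P2
])

def sublinear_E(x, P=P):
    """Upper expectation E[X] = max_P E_P[X]; sub-linear by construction."""
    return np.max(P @ np.asarray(x, dtype=float))

X = np.array([1.0, 0.0, -1.0])
Y = np.array([0.5, 2.0, 1.0])

# Numerical checks of (a)-(d) of Definition 2.1 for this particular construction.
assert sublinear_E(X + 3) >= sublinear_E(np.minimum(X + 3, Y))        # (a): X+3 >= min(X+3, Y) pointwise
assert abs(sublinear_E(np.full(3, 7.0)) - 7.0) < 1e-12                # (b): constants
assert abs(sublinear_E(2.5 * X) - 2.5 * sublinear_E(X)) < 1e-12       # (c): positive homogeneity
assert sublinear_E(X + Y) <= sublinear_E(X) + sublinear_E(Y) + 1e-12  # (d): sub-additivity
```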

A function $V:\mathcal{F}\to[0,1]$ is called a capacity if

(a) $V(\emptyset)=0$, $V(\Omega)=1$;

(b) $V(A)\le V(B)$ for all $A\subset B$, $A,B\in\mathcal{F}$.

Furthermore, a capacity $V$ is called continuous if it obeys

(c) $A_n\uparrow A$ implies $V(A_n)\uparrow V(A)$;

(d) $A_n\downarrow A$ implies $V(A_n)\downarrow V(A)$.

$V$ is said to be sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{F}$.

Under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, set $V(A):=\inf\{\hat{\mathbb{E}}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\}$, $A\in\mathcal{F}$ (cf. Zhang [3]). $V$ is a sub-additive capacity. Write

$$C_V(X):=\int_{0}^{\infty}V(X>x)\,dx+\int_{-\infty}^{0}\left(V(X>x)-1\right)dx.$$

As in 4.3 of Zhang [3], throughout this paper we define an extension of $\hat{\mathbb{E}}$ to the space of all random variables by

$$\hat{\mathbb{E}}^{*}(X)=\inf\{\hat{\mathbb{E}}[Y]:X\le Y,\ Y\in\mathcal{H}\}.$$

Then $\hat{\mathbb{E}}^{*}$ is a sub-linear expectation on the space of all random variables, $\hat{\mathbb{E}}^{*}[X]=\hat{\mathbb{E}}[X]$ for $X\in\mathcal{H}$, and $V(A)=\hat{\mathbb{E}}^{*}(I_A)$, $A\in\mathcal{F}$.
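In a finite toy setting the upper capacity and the Choquet-type functional $C_V$ can be computed directly. Below, `V` is taken as the upper probability $\max_P P(A)$ induced by the two measures above, which agrees with the definition $V(A)=\inf\{\hat{\mathbb{E}}[\xi]:I_A\le\xi\}$ when $\mathcal{H}$ contains all functions on the finite space (an assumption of this sketch); the grid-based integration and the helper names are illustrative.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])

def V(indicator):
    """Upper capacity V(A) = max_P P(A), with A given by its indicator vector."""
    return np.max(P @ np.asarray(indicator, dtype=float))

def C_V(x, grid=np.linspace(-10.0, 10.0, 20001)):
    """Choquet-type functional C_V(X) = int_0^inf V(X>t) dt + int_{-inf}^0 (V(X>t) - 1) dt,
    approximated by a Riemann sum on a fixed grid (assumes |X| stays well inside the grid)."""
    x = np.asarray(x, dtype=float)
    vals = np.array([V(x > t) for t in grid])
    dt = grid[1] - grid[0]
    return vals[grid >= 0].sum() * dt + (vals[grid < 0] - 1.0).sum() * dt

X = np.array([2.0, 0.0, -1.0])
print(V(np.array([1, 0, 0])), C_V(X))   # upper probability of the first atom, and C_V(X) ~ 0.8
```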

Suppose $\boldsymbol{X}=(X_1,\ldots,X_m)$, $X_i\in\mathcal{H}$, and $\boldsymbol{Y}=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal{H}$, are two random vectors on $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. $\boldsymbol{Y}$ is said to be negatively dependent to $\boldsymbol{X}$ if for $\psi_1$ on $C_{l,\mathrm{Lip}}(\mathbb{R}^m)$ and $\psi_2$ on $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ we have $\hat{\mathbb{E}}[\psi_1(\boldsymbol{X})\psi_2(\boldsymbol{Y})]\le\hat{\mathbb{E}}[\psi_1(\boldsymbol{X})]\hat{\mathbb{E}}[\psi_2(\boldsymbol{Y})]$ whenever $\psi_1(\boldsymbol{X})\ge0$, $\hat{\mathbb{E}}[\psi_2(\boldsymbol{Y})]\ge0$, $\hat{\mathbb{E}}[\psi_1(\boldsymbol{X})\psi_2(\boldsymbol{Y})]<\infty$, $\hat{\mathbb{E}}[|\psi_1(\boldsymbol{X})|]<\infty$, $\hat{\mathbb{E}}[|\psi_2(\boldsymbol{Y})|]<\infty$, and either $\psi_1$ and $\psi_2$ are coordinatewise nondecreasing or $\psi_1$ and $\psi_2$ are coordinatewise nonincreasing (see Definition 2.3 of Zhang [3], Definition 1.5 of Zhang [4]). $\{X_n\}_{n=-\infty}^{\infty}$ is said to be negatively dependent if $X_{n+l}$ is negatively dependent to $(X_l,X_{l+1},\ldots,X_{l+n-1})$ for each $n\ge1$ and $-\infty<l<\infty$. The existence of negatively dependent random variables $\{X_n\}_{n=-\infty}^{\infty}$ under sub-linear expectations is guaranteed by Example 1.6 of Zhang [4] and Kolmogorov's existence theorem in classical probability space.

Suppose $\boldsymbol{X}_1$ and $\boldsymbol{X}_2$ are two $n$-dimensional random vectors defined, respectively, on the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are said to be identically distributed if for every $\psi\in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$,

$$\hat{\mathbb{E}}_1[\psi(\boldsymbol{X}_1)]=\hat{\mathbb{E}}_2[\psi(\boldsymbol{X}_2)].$$

$\{X_n\}_{n=1}^{\infty}$ is said to be identically distributed if, for every $i\ge1$, $X_i$ and $X_1$ are identically distributed.

Throughout this paper, we suppose that $\hat{\mathbb{E}}$ is countably sub-additive, i.e., that $\hat{\mathbb{E}}(X)\le\sum_{n=1}^{\infty}\hat{\mathbb{E}}(X_n)$ whenever $X\le\sum_{n=1}^{\infty}X_n$ with $X,X_n\in\mathcal{H}$ and $X\ge0$, $X_n\ge0$, $n=1,2,\ldots$. Therefore the extension $\hat{\mathbb{E}}^{*}$ is also countably sub-additive. Write $S_n=\sum_{i=1}^{n}X_i$, $n\ge1$. Let $C$ denote a positive constant which may change from line to line. $I(A)$ or $I_A$ denotes the indicator function of $A$. The symbol $a_x\asymp b_x$ means that there exist two positive constants $C_1$, $C_2$ fulfilling $C_1|b_x|\le|a_x|\le C_2|b_x|$, and $x^{+}$ stands for $\max\{x,0\}$ for $x\in\mathbb{R}$.

As in Zhang [4], if $X_1,X_2,\ldots,X_n$ are negatively dependent random variables and $f_1,f_2,\ldots,f_n$ are all non-increasing (or all non-decreasing) functions, then $f_1(X_1),f_2(X_2),\ldots,f_n(X_n)$ are negatively dependent random variables.

We cite the following lemmas under sub-linear expectations.

Lemma 2.1. (Cf. Lemma 4.5 (iii) of Zhang [3]) If $\hat{\mathbb{E}}$ is countably sub-additive under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, then for $X\in\mathcal{H}$,

$$\hat{\mathbb{E}}|X|\le C_V(|X|).$$
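For orientation, in the classical case, where $V$ is a probability measure $P$ and $\hat{\mathbb{E}}$ is the corresponding linear expectation $E_P$ (an assumption made only for this illustration), the quantity $C_V(|X|)$ reduces to the layer-cake representation of the mean, so Lemma 2.1 turns an identity into an inequality:

```latex
% Classical special case V = P, \hat{E} = E_P (illustrative assumption):
\[
  C_V(|X|) = \int_0^{\infty} P(|X|>x)\,dx
           = \int_0^{\infty} E_P\!\left[I\{|X|>x\}\right]dx
           = E_P\!\left[\int_0^{|X|} dx\right]
           = E_P|X|,
\]
% so Lemma 2.1, \hat{\mathbb{E}}|X| \le C_V(|X|), holds with equality in this setting.
```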

Lemma 2.2. (Cf. Theorem 2.1 of Zhang [4] and its proof) Assume that $p>1$ and $\{Y_n;n\ge1\}$ is a sequence of negatively dependent random variables with $\hat{\mathbb{E}}[Y_k]\le0$, $k\ge1$, under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. Then for every $n\ge1$, there exists a positive constant $C=C(p)$ depending on $p$ such that for $p\ge2$,

$$\hat{\mathbb{E}}\left[\left|\max_{1\le i\le n}\sum_{j=i}^{n}Y_j\right|^{p}\right]\le C\left\{\sum_{i=1}^{n}\hat{\mathbb{E}}|Y_i|^{p}+\left(\sum_{i=1}^{n}\hat{\mathbb{E}}Y_i^{2}\right)^{p/2}\right\},$$
$$\hat{\mathbb{E}}\left[\left(\left(\sum_{j=1}^{n}Y_j\right)^{+}\right)^{p}\right]\le C\left\{\sum_{i=1}^{n}\hat{\mathbb{E}}|Y_i|^{p}+\left(\sum_{i=1}^{n}\hat{\mathbb{E}}Y_i^{2}\right)^{p/2}\right\}.\qquad(2.1)$$

By (2.1) of Lemma 2.2 and a proof similar to that of Lemma 2.4 of Xu et al. [5], we can obtain the following.

Lemma 2.3. Assume that $p>1$ and $\{Y_n;n\ge1\}$ is a sequence of negatively dependent random variables with $\hat{\mathbb{E}}[Y_k]\le0$, $k\ge1$, under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. Then for every $n\ge1$, there exists a positive constant $C=C(p)$ depending on $p$ such that for $p\ge2$,

$$\hat{\mathbb{E}}\left[\max_{1\le i\le n}\left(\left(\sum_{j=1}^{i}Y_j\right)^{+}\right)^{p}\right]\le C(\log n)^{p}\left\{\sum_{i=1}^{n}\hat{\mathbb{E}}|Y_i|^{p}+\left(\sum_{i=1}^{n}\hat{\mathbb{E}}Y_i^{2}\right)^{p/2}\right\}.$$
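To see how the two terms on the right-hand side of (2.1) and of Lemma 2.3 scale, one can run a quick Monte Carlo experiment in the classical iid case (a special case of the negative-dependence setting, used here only as an illustrative assumption): for centered innovations with finite moments, both $\hat{\mathbb{E}}[((\sum_{j}Y_j)^{+})^{p}]$ and $(\sum_{i}\hat{\mathbb{E}}Y_i^{2})^{p/2}$ grow like $n^{p/2}$, while $\sum_{i}\hat{\mathbb{E}}|Y_i|^{p}$ grows only like $n$. The helper `empirical_sides` and the uniform innovations are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4.0

def empirical_sides(n, reps=2000):
    """Monte Carlo estimates of E[((sum Y_j)^+)^p], sum E|Y_i|^p and (sum E Y_i^2)^{p/2}
    for iid centered uniforms on [-1, 1]."""
    Y = rng.uniform(-1.0, 1.0, size=(reps, n))
    S_plus = np.clip(Y.sum(axis=1), 0.0, None)
    lhs = np.mean(S_plus ** p)                    # grows like n^{p/2}
    term_p = n * np.mean(np.abs(Y) ** p)          # grows like n
    term_2 = (n * np.mean(Y ** 2)) ** (p / 2)     # grows like n^{p/2}
    return lhs, term_p, term_2

for n in (10, 100, 1000):
    print(n, empirical_sides(n))
```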

Lemma 2.4. (Cf. Lemma 2.2 of Zhong and Wu [12] and its proof) If $X\in\mathcal{H}$, $\alpha>0$, $\beta>0$, $\gamma>0$, $\eta>0$, $C_V\left(|X|^{\alpha}h(|X|^{\beta})(\log(1+|X|))^{\eta}\right)<\infty$, and $h(\cdot)$ is a slowly varying function, then there exist two positive constants $C_1$, $C_2$ depending on $\alpha,\beta,\gamma,\eta$ such that

$$C_1C_V\left(|X|^{\alpha}h(|X|^{\beta})(\log(1+|X|))^{\eta}\right)\le\int_{0}^{\infty}V\{|X|>\gamma y\}y^{\alpha-1}h(y^{\beta})(\log(1+y))^{\eta}\,dy\le C_2C_V\left(|X|^{\alpha}h(|X|^{\beta})(\log(1+|X|))^{\eta}\right)<\infty.$$

Proof. Here we give a detailed proof. By Lemma 2.1 of Zhong and Wu [12], $h(x)=c(x)\exp\left\{\int_{0}^{x}\frac{f(u)}{u}\,du\right\}$, where $\lim_{x\to\infty}c(x)=c>0$, $c(x)\ge0$ and $\lim_{x\to\infty}f(x)=0$. Set $Z(x)=|x|^{\alpha}h(|x|^{\beta})(\log(1+|x|))^{\eta}$ and write the inverse function of $Z(x)$ as $Z^{-1}(x)$. We get

$$\begin{aligned}
\int_{0}^{\infty}V\{|X|>\gamma y\}y^{\alpha-1}h(y^{\beta})(\log(1+y))^{\eta}\,dy
&\asymp\int_{0}^{\infty}V\{|X|>\gamma y\}\frac{1}{\alpha}\left(\alpha\gamma^{\alpha}y^{\alpha-1}h((\gamma y)^{\beta})+\beta\gamma^{\alpha}y^{\alpha-1}h((\gamma y)^{\beta})f((\gamma y)^{\beta})\right)(\log(1+\gamma y))^{\eta}\,dy\\
&\asymp\int_{0}^{\infty}V\left(|X|>Z^{-1}(x):=\gamma y\right)dx
=\int_{0}^{\infty}V\left(|X|^{\alpha}h(|X|^{\beta})(\log(1+|X|))^{\eta}>x\right)dx\\
&=C_V\left(|X|^{\alpha}h(|X|^{\beta})(\log(1+|X|))^{\eta}\right)<\infty.
\end{aligned}$$

Our main results are stated as follows.

Theorem 3.1. Assume that $h$ is a slowly varying function, $1\le p<2$ and $r>1$. Suppose that $\{X_n=\sum_{i=-\infty}^{\infty}a_iY_{i+n},\ n\ge1\}$ is a moving average process produced by a sequence of negatively dependent random variables $\{Y_i,-\infty<i<\infty\}$, where $\{a_i,-\infty<i<\infty\}$ is a sequence of non-negative real numbers with $\sum_{i=-\infty}^{\infty}a_i<\infty$, and for each fixed $-\infty<i<\infty$, $Y_i$ is identically distributed as $Y$ under the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. Suppose that for some $q>\max\{2,rp\}$, $C_V\left(|Y|^{rp}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty$. Then for all $\varepsilon>0$,

(i) $\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge\varepsilon n^{1/p}\right\}<\infty,$

$\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left(\sum_{i=1}^{k}\left(-X_i-\hat{\mathbb{E}}(-X_i)\right)\right)\ge\varepsilon n^{1/p}\right\}<\infty,$

and

(ii) $\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}<\infty,$

$\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left(\sum_{i=1}^{k}\left(-X_i-\hat{\mathbb{E}}(-X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}<\infty.$

Moreover, if $\hat{\mathbb{E}}(X_i)=-\hat{\mathbb{E}}(-X_i)$, then for all $\varepsilon>0$,

(iii) $\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left|\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right|\ge\varepsilon n^{1/p}\right\}<\infty,$

$\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left|\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right|\Big/k^{1/p}\ge\varepsilon\right\}<\infty.$

Remark 3.1. Letting $a_0=1$, $a_i=0$ for $i\ne0$, and $h(x)\equiv1$ in Theorem 3.1, and arguing as in the proof of Corollary 3.1 of Xu and Kong [6], we deduce that Conjecture 3.1 of Xu and Kong [6] holds in some sense. Adapting the proof of Theorem 3.1, we see that (iii) of Theorem 3.1 still holds when the condition that for some $q>\max\{2,rp\}$, $C_V\left(|Y|^{rp}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty$ is reduced to $C_V\left(|Y|^{rp}h(|Y|^{p})\right)<\infty$, the other conditions remaining unchanged. The above discussion also applies to Theorems 3.2, 3.3 and 3.4.
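To make the object of Theorem 3.1 concrete, the sketch below builds a (finitely supported) moving average $X_n=\sum_i a_iY_{i+n}$ from iid centered innovations, a classical special case of the negative-dependence setting, and tracks the normalized maximal partial sums $n^{-1/p}\max_{1\le k\le n}|\sum_{i=1}^{k}X_i|$, which in this classical special case tend to 0 almost surely. The helper `make_moving_average`, the weights and the Student-$t$ innovations are illustrative assumptions, not prescribed by the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_moving_average(N, weights, innovations):
    """X_n = sum_i a_i Y_{i+n}, with the two-sided sum truncated to the finite support of `weights`."""
    # weights: dict {i: a_i} with finitely many non-negative a_i (a truncation of the
    # absolutely summable sequence in Theorem 3.1); innovations must cover the shifted indices.
    return np.array([sum(a * innovations[i + n] for i, a in weights.items())
                     for n in range(1, N + 1)])

p, N = 1.5, 20000
weights = {-1: 0.25, 0: 0.5, 1: 0.25}               # a_i >= 0, absolutely summable
Y = rng.standard_t(df=5, size=N + 4)                 # centered, heavy-ish tails, E|Y|^{rp} finite for rp < 5
X = make_moving_average(N, weights, innovations=Y)
S = np.cumsum(X)
for n in (100, 1000, 10000, 20000):
    print(n, np.max(np.abs(S[:n])) / n ** (1 / p))   # decreases slowly toward 0 as n grows
```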

By Theorem 2.1 (b) of Zhang [4] and its proof, together with a proof similar to that of Theorem 3.1, we can obtain the following.

Theorem 3.2. Suppose that the assumptions of Theorem 3.1 hold, with the condition that $Y_m$ is negatively dependent to $(Y_{m+1},\ldots,Y_{m+l})$ for each $-\infty<m<\infty$ and $l\ge1$ in place of the assumption that $\{Y_i,-\infty<i<\infty\}$ is a sequence of negatively dependent random variables, the other conditions remaining unchanged. Suppose that $\hat{\mathbb{E}}(Y)=0$ and that for some $q>\max\{2,rp\}$, $C_V\left(|Y|^{rp}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty$. Then all conclusions of Theorem 3.1 still hold.

We study the case $r=1$ in the following.

Theorem 3.3. Assume that $h$ is a slowly varying function and $1\le p<2$. Suppose that $\{a_i,-\infty<i<\infty\}$ is a sequence of non-negative numbers with $\sum_{i=-\infty}^{\infty}a_i^{\theta}<\infty$, where $\theta\in(0,1)$ if $p=1$ and $\theta=1$ if $1<p<2$. Assume that $\{X_n=\sum_{i=-\infty}^{\infty}a_iY_{i+n},\ n\ge1\}$ is a moving average process produced by a sequence of negatively dependent random variables $\{Y_i,-\infty<i<\infty\}$, and for each fixed $-\infty<i<\infty$, $Y_i$ is identically distributed as $Y$ under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. Suppose that for some $q>\max\{2,p\}$, $C_V\left(|Y|^{p}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty$. Then for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}\frac{h(n)}{n}V\left\{\max_{1\le k\le n}\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\ge\varepsilon n^{1/p}\right\}<\infty,$$
$$\sum_{n=1}^{\infty}\frac{h(n)}{n}V\left\{\max_{1\le k\le n}\sum_{i=1}^{k}\left(-X_i-\hat{\mathbb{E}}(-X_i)\right)\ge\varepsilon n^{1/p}\right\}<\infty.$$

In particular, if $\hat{\mathbb{E}}[Y]=-\hat{\mathbb{E}}[-Y]$, $C_V(|Y|^{p})<\infty$ and $V$ is continuous, then $S_n/n^{1/p}\to\hat{\mathbb{E}}(Y)$ a.s. $V$, i.e.,

$$V\left\{\Omega\setminus\left\{\lim_{n\to\infty}S_n/n^{1/p}=\hat{\mathbb{E}}(Y)\right\}\right\}=0,$$

which is called the Marcinkiewicz-Zygmund type strong law of large numbers under sub-linear expectations.

By Theorem 2.1 (b) of Zhang [4] and its proof, together with a proof similar to that of Theorem 3.3, we can obtain the following.

Theorem 3.4. Suppose that the assumptions of Theorem 3.3 hold, with the condition that $Y_m$ is negatively dependent to $(Y_{m+1},\ldots,Y_{m+l})$ for each $-\infty<m<\infty$ and $l\ge1$ in place of the assumption that $\{Y_i,-\infty<i<\infty\}$ is a sequence of negatively dependent random variables, the other conditions remaining unchanged. Suppose that for some $q>\max\{2,p\}$,

$$C_V\left(|Y|^{p}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty.$$

Then all conclusions of Theorem 3.3 still hold.

Remark 3.2. Theorems 3.3 and 3.4 complement Theorem 1 of Zhang and Lin [11], which concerns identically distributed, independent random variables under sub-linear expectations.

We first establish some helpful lemmas.

Lemma 4.1. Suppose that $r>1$ and $1\le p<2$. Then for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}\le C\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge(\varepsilon/2^{2/p})n^{1/p}\right\}.$$

Proof. We get

$$\begin{aligned}
&\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}
=\sum_{m=1}^{\infty}\sum_{n=2^{m-1}}^{2^{m}-1}n^{r-2}h(n)V\left\{\sup_{k\ge n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}\\
&\quad\le C\sum_{m=1}^{\infty}V\left\{\sup_{k\ge2^{m-1}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}\sum_{n=2^{m-1}}^{2^{m}-1}2^{m(r-2)}h(2^{m})\\
&\quad\le C\sum_{m=1}^{\infty}2^{m(r-1)}h(2^{m})V\left\{\sup_{k\ge2^{m-1}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\Big/k^{1/p}\ge\varepsilon\right\}\\
&\quad\le C\sum_{m=1}^{\infty}2^{m(r-1)}h(2^{m})V\left\{\sup_{l\ge m}\max_{2^{l-1}\le k<2^{l}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge\varepsilon2^{(l-1)/p}\right\}\\
&\quad\le C\sum_{m=1}^{\infty}2^{m(r-1)}h(2^{m})\sum_{l=m}^{\infty}V\left\{\max_{1\le k<2^{l}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge\varepsilon2^{(l-1)/p}\right\}\\
&\quad=C\sum_{l=1}^{\infty}V\left\{\max_{1\le k<2^{l}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge\varepsilon2^{(l-1)/p}\right\}\sum_{m=1}^{l}2^{m(r-1)}h(2^{m})\\
&\quad\le C\sum_{l=1}^{\infty}2^{l(r-1)}h(2^{l})V\left\{\max_{1\le k<2^{l}}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge\varepsilon2^{(l-1)/p}\right\}\\
&\quad\le C\sum_{l=1}^{\infty}\sum_{n=2^{l}}^{2^{l+1}-1}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge(\varepsilon/2^{2/p})n^{1/p}\right\}\\
&\quad\le C\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\left(\sum_{i=1}^{k}\left(X_i-\hat{\mathbb{E}}(X_i)\right)\right)\ge(\varepsilon/2^{2/p})n^{1/p}\right\}.
\end{aligned}$$

Lemma 4.2. Assume that $Y$ is a random variable fulfilling $C_V\left(|Y|^{rp}h(|Y|^{p})\right)<\infty$ for some $r\ge1$, $p\ge1$. Write $Y'=-n^{1/p}I\{Y<-n^{1/p}\}+YI\{|Y|\le n^{1/p}\}+n^{1/p}I\{Y>n^{1/p}\}$. Suppose that $q>rp$. Then

$$\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\hat{\mathbb{E}}|Y'|^{q}\le C\,C_V\left(|Y|^{rp}h(|Y|^{p})(\log(1+|Y|))^{q}\right).$$

Proof. Since $r-q/p<0$, it follows from Lemma 2.1 and Lemma 2.4 that

$$\begin{aligned}
\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\hat{\mathbb{E}}|Y'|^{q}
&\le\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}C_V\{|Y'|^{q}\}
\le\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\int_{0}^{n^{1/p}}V\{|Y|^{q}>x^{q}\}qx^{q-1}\,dx\\
&\le C\int_{1}^{\infty}y^{r-1-q/p}h(y)(\log y)^{q}\left[\int_{0}^{1}+\int_{1}^{y^{1/p}}\right]V\{|Y|^{q}>x^{q}\}x^{q-1}\,dx\,dy\\
&\le C\int_{0}^{1}V\{|Y|^{q}>x\}\,dx\int_{1}^{\infty}y^{r-1-q/p}h(y)(\log y)^{q}\,dy
+C\int_{1}^{\infty}V\{|Y|>x\}x^{q-1}\int_{x^{p}}^{\infty}y^{r-1-q/p}h(y)(\log y)^{q}\,dy\,dx\\
&\le C+C\int_{1}^{\infty}V\{|Y|>x\}h(x^{p})x^{rp-1}(\log x)^{q}\,dx
\le C\,C_V\left(|Y|^{rp}h(|Y|^{p})(\log(1+|Y|))^{q}\right)<\infty.
\end{aligned}$$

In the rest of this paper, let $1/2<\mu<1$ and let $g\in C_{l,\mathrm{Lip}}(\mathbb{R})$ satisfy $0\le g(y)\le1$ for all $y$, $g(y)=1$ if $|y|\le\mu$, and $g(y)=0$ if $|y|>1$. We assume that $g(y)$ is decreasing for $y\ge0$; one concrete choice is sketched below. The next lemma gives a useful fact for the proofs of Theorems 3.1 and 3.3.
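One admissible choice of such a $g$ is a piecewise-linear ramp, as in the following sketch (an illustrative construction with $\mu=0.75$, not a function fixed by the paper); it is bounded, Lipschitz, equal to $1$ on $[-\mu,\mu]$, equal to $0$ off $(-1,1)$, and decreasing in $|y|$, hence lies in $C_{l,\mathrm{Lip}}(\mathbb{R})$.

```python
import numpy as np

def g(y, mu=0.75):
    """Piecewise-linear g: 1 on |y| <= mu, 0 on |y| >= 1, linear in between."""
    a = np.abs(np.asarray(y, dtype=float))
    return np.clip((1.0 - a) / (1.0 - mu), 0.0, 1.0)

print(g(np.array([0.0, 0.5, 0.75, 0.9, 1.0, 1.5])))   # -> [1.  1.  1.  0.4 0.  0. ]
```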

Lemma 4.3. Assume that $h$ is a slowly varying function, $r\ge1$ and $p\ge1$. Assume that $\{X_n,n\ge1\}$ is a moving average process produced by a sequence of negatively dependent random variables $\{Y_i,-\infty<i<\infty\}$, where $\{a_i,-\infty<i<\infty\}$ is a sequence of non-negative numbers, and for each fixed $-\infty<i<\infty$, $Y_i$ is identically distributed as $Y$ with $\hat{\mathbb{E}}(Y)=0$ and $C_V(|Y|^{p})<\infty$ under $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. For all $\varepsilon>0$, write

$$I:=\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}Y''_j\ge\varepsilon n^{1/p}/2\right\},$$

and

$$II:=\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sum_{i=-\infty}^{\infty}a_i\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\ge\varepsilon n^{1/p}/4\right\},$$

where

$$Y'_j=-n^{1/p}I\{Y_j<-n^{1/p}\}+Y_jI\{|Y_j|\le n^{1/p}\}+n^{1/p}I\{Y_j>n^{1/p}\},$$
$$Y''_j=Y_j-Y'_j=(Y_j+n^{1/p})I\{Y_j<-n^{1/p}\}+(Y_j-n^{1/p})I\{Y_j>n^{1/p}\}.$$

Suppose that $I<\infty$ and $II<\infty$. Then

$$\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}S_k\ge\varepsilon n^{1/p}\right\}\le C(I+II)<\infty.$$

    Proof. Note that

$$\sum_{k=1}^{n}X_k=\sum_{k=1}^{n}\sum_{i=-\infty}^{\infty}a_iY_{i+k}=\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+n}Y_j.$$

By $\sum_{i=-\infty}^{\infty}a_i<\infty$, $\hat{\mathbb{E}}(Y_j)=0$, $|\hat{\mathbb{E}}(X)-\hat{\mathbb{E}}(Y)|\le\hat{\mathbb{E}}|X-Y|$ and Lemma 2.1, we get

$$\begin{aligned}
n^{-1/p}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+n}\left|\hat{\mathbb{E}}Y'_j\right|
&=n^{-1/p}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+n}\left|\hat{\mathbb{E}}[Y'_j]-\hat{\mathbb{E}}[Y_j]\right|
\le n^{-1/p}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+n}\hat{\mathbb{E}}\left|Y'_j-Y_j\right|\\
&\le Cn^{1-1/p}\hat{\mathbb{E}}|Y''_1|=Cn^{1-1/p}\hat{\mathbb{E}}|Y''|
\le Cn^{1-1/p}\hat{\mathbb{E}}\left[(n^{1/p})^{-(p-1)}|Y|^{p}\left(1-g\left(|Y|/n^{1/p}\right)\right)\right]\\
&=C\hat{\mathbb{E}}\left[|Y|^{p}\left(1-g\left(|Y|/n^{1/p}\right)\right)\right]
\le C\,C_V\left\{|Y|^{p}\left(1-g\left(|Y|/n^{1/p}\right)\right)\right\}
\le C\,C_V\left\{|Y|^{p}I\{|Y|\ge\mu n^{1/p}\}\right\}\to0,\quad n\to\infty,
\end{aligned}$$

where, throughout this paper, $Y'$ and $Y''$ are defined as $Y'_1$ and $Y''_1$ with $Y$ in place of $Y_1$. Therefore, for $n$ sufficiently large, we obtain

$$n^{-1/p}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+n}\left|\hat{\mathbb{E}}Y'_j\right|<\varepsilon/4.$$

    Then

$$\begin{aligned}
\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}S_k\ge\varepsilon n^{1/p}\right\}
&\le C\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}Y''_j\ge\varepsilon n^{1/p}/2\right\}\\
&\quad+\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\ge\varepsilon n^{1/p}/4\right\}\\
&\le C\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\max_{1\le k\le n}\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}Y''_j\ge\varepsilon n^{1/p}/2\right\}\\
&\quad+\sum_{n=1}^{\infty}n^{r-2}h(n)V\left\{\sum_{i=-\infty}^{\infty}a_i\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\ge\varepsilon n^{1/p}/4\right\}
\le C(I+II).
\end{aligned}$$
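The truncation in Lemma 4.3 is simply the clipping of $Y_j$ to $[-n^{1/p},n^{1/p}]$ together with the overshoot beyond the clipping level, so that $Y_j=Y'_j+Y''_j$; this identity is what allows the partial sums of the moving average to be split into the two events estimated by $I$ and $II$. A minimal sketch of the decomposition (the helper `truncate` and the sample values are illustrative):

```python
import numpy as np

def truncate(y, b):
    """Y' = clip of Y to [-b, b]; Y'' = Y - Y' keeps only the overshoot beyond +-b."""
    y = np.asarray(y, dtype=float)
    y_prime = np.clip(y, -b, b)    # equals -b*I{Y<-b} + Y*I{|Y|<=b} + b*I{Y>b}
    y_dprime = y - y_prime         # equals (Y+b)*I{Y<-b} + (Y-b)*I{Y>b}
    return y_prime, y_dprime

y = np.array([-3.0, -0.5, 0.0, 2.0, 5.0])
yp, ypp = truncate(y, b=2.0)
assert np.allclose(yp + ypp, y)    # Y = Y' + Y''
```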

Proof of Theorem 3.1. By Lemma 4.1, it suffices to establish (i). Without loss of generality, we assume that $\hat{\mathbb{E}}(Y)=0$. By Lemma 4.3, we only need to show that $I<\infty$ and $II<\infty$.

For $I$, combining the Markov inequality under sub-linear expectations with Lemma 2.1 and Lemma 2.4 yields

$$\begin{aligned}
I&\le C\sum_{n=1}^{\infty}n^{r-2}h(n)n^{-1/p}\hat{\mathbb{E}}\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}Y''_j\right|
\le C\sum_{n=1}^{\infty}n^{r-1-1/p}h(n)\hat{\mathbb{E}}|Y''_1|=C\sum_{n=1}^{\infty}n^{r-1-1/p}h(n)\hat{\mathbb{E}}|Y''|\\
&\le C\sum_{n=1}^{\infty}n^{r-1-1/p}h(n)C_V\{|Y''|\}
=C\sum_{n=1}^{\infty}n^{r-1-1/p}h(n)\int_{0}^{\infty}V\{|Y''|>x\}\,dx\\
&\le C\sum_{n=1}^{\infty}n^{r-1-1/p}h(n)\left[V\{|Y|>n^{1/p}\}n^{1/p}+\int_{n^{1/p}}^{\infty}V\{|Y|>x\}\,dx\right]\\
&\le C\int_{1}^{\infty}x^{r-1}h(x)V\{|Y|>x^{1/p}\}\,dx+C\int_{1}^{\infty}y^{r-1-1/p}h(y)\int_{y^{1/p}}^{\infty}V\{|Y|>x\}\,dx\,dy\\
&\le C\int_{1}^{\infty}V\{|Y|^{pr}h(|Y|^{p})>x^{r}h(x)\}\,d(x^{r}h(x))+C\int_{1}^{\infty}V\{|Y|>x\}\int_{1}^{x^{p}}y^{r-1-1/p}h(y)\,dy\,dx\\
&\le C\,C_V\{|Y|^{pr}h(|Y|^{p})\}+C\int_{1}^{\infty}V\{|Y|>x\}x^{rp-1}h(x^{p})\,dx
\le C\,C_V\{|Y|^{pr}h(|Y|^{p})\}<\infty.
\end{aligned}$$

For $II$, by the Markov inequality under sub-linear expectations, the Hölder inequality and Lemma 2.3, we have for all $q>2$,

$$\begin{aligned}
II&\le C\sum_{n=1}^{\infty}n^{r-2}h(n)n^{-q/p}\hat{\mathbb{E}}\left|\sum_{i=-\infty}^{\infty}a_i\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right|^{q}\\
&\le C\sum_{n=1}^{\infty}n^{r-2}h(n)n^{-q/p}\hat{\mathbb{E}}\left[\sum_{i=-\infty}^{\infty}a_i^{1-1/q}\left(a_i^{1/q}\left|\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right|\right)\right]^{q}\\
&\le C\sum_{n=1}^{\infty}n^{r-2-q/p}h(n)\left(\sum_{i=-\infty}^{\infty}a_i\right)^{q-1}\sum_{i=-\infty}^{\infty}a_i\hat{\mathbb{E}}\left|\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right|^{q}\\
&=C\sum_{n=1}^{\infty}n^{r-2-q/p}h(n)\left(\sum_{i=-\infty}^{\infty}a_i\right)^{q-1}\sum_{i=-\infty}^{\infty}a_i\hat{\mathbb{E}}\max_{1\le k\le n}\left(\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right)^{q}\\
&\le C\sum_{n=1}^{\infty}n^{r-2-q/p}h(n)(\log n)^{q}\left(n\hat{\mathbb{E}}|Y'_1|^{2}\right)^{q/2}+C\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\hat{\mathbb{E}}|Y'_1|^{q}=:II_1+II_2.
\end{aligned}$$

To get $II_1<\infty$, we consider two cases. If $rp<2$, take $q>2$ and observe that in this case $r-2+q/2-rq/2<-1$. By Lemma 2.1, we obtain

$$\begin{aligned}
II_1&=C\sum_{n=1}^{\infty}n^{r-2-q/p}h(n)(\log n)^{q}n^{q/2}\left(\hat{\mathbb{E}}|Y'_1|^{2}\right)^{q/2}
=C\sum_{n=1}^{\infty}n^{r-2-q/p}h(n)(\log n)^{q}n^{q/2}\left(\hat{\mathbb{E}}|Y'|^{2}\right)^{q/2}\\
&\le C\sum_{n=1}^{\infty}n^{r-2-q/p+q/2}h(n)(\log n)^{q}\left(\hat{\mathbb{E}}\left[|Y'|^{rp}|Y'|^{2-rp}\right]\right)^{q/2}
\le C\sum_{n=1}^{\infty}n^{r-2-q/p+q/2}h(n)(\log n)^{q}\left(C_V(|Y|^{rp})\right)^{q/2}n^{\frac{2-rp}{p}\cdot\frac{q}{2}}\\
&\le C\sum_{n=1}^{\infty}n^{r-2+q/2-rq/2}h(n)(\log n)^{q}<\infty.
\end{aligned}$$

If $rp\ge2$, take $q>pr$. Note that in this case $\hat{\mathbb{E}}|Y|^{2}\le C_V(|Y|^{2})<\infty$. We get

$$II_1=C\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\left(\hat{\mathbb{E}}|Y'_1|^{2}\right)^{q/2}=C\sum_{n=1}^{\infty}n^{r-1-q/p}h(n)(\log n)^{q}\left(\hat{\mathbb{E}}|Y'|^{2}\right)^{q/2}\le C\sum_{n=1}^{\infty}n^{r-1-q/p}(\log n)^{q}h(n)<\infty.$$

By Lemma 4.2, we conclude that $II_2<\infty$. The proof of Theorem 3.1 is complete.

Proof of Theorem 3.3. Without loss of generality, we assume that $\hat{\mathbb{E}}(Y)=0$. By Lemma 4.3, we only need to establish that $I<\infty$ and $II<\infty$ with $r=1$. For $I$, by the Markov inequality under sub-linear expectations, the $C_r$ inequality, Lemma 2.1 and Lemma 2.4 (observe that $\theta\le1$), we get

$$\begin{aligned}
I&\le C\sum_{n=1}^{\infty}n^{-1}h(n)n^{-\theta/p}\hat{\mathbb{E}}\max_{1\le k\le n}\left|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i+1}^{i+k}Y''_j\right|^{\theta}
\le C\sum_{n=1}^{\infty}h(n)n^{-\theta/p}\hat{\mathbb{E}}|Y''_1|^{\theta}=C\sum_{n=1}^{\infty}h(n)n^{-\theta/p}\hat{\mathbb{E}}|Y''|^{\theta}\\
&\le C\sum_{n=1}^{\infty}h(n)n^{-\theta/p}C_V\left(|Y''|^{\theta}\right)
\le C\sum_{n=1}^{\infty}h(n)n^{-\theta/p}C_V\left\{|Y|^{\theta}I\{|Y|>n^{1/p}\}\right\}\\
&\le C\sum_{n=1}^{\infty}n^{-\theta/p}h(n)\int_{0}^{\infty}V\left\{|Y|^{\theta}I\{|Y|>n^{1/p}\}>x\right\}dx
\le C\int_{1}^{\infty}y^{-\theta/p}h(y)\int_{0}^{\infty}V\left\{|Y|^{\theta}I\{|Y|>y^{1/p}\}>x\right\}dx\,dy\\
&\le C\int_{1}^{\infty}y^{-\theta/p}h(y)\left[\int_{0}^{y^{\theta/p}}+\int_{y^{\theta/p}}^{\infty}\right]V\left\{|Y|^{\theta}I\{|Y|>y^{1/p}\}>x\right\}dx\,dy\\
&\le C\int_{1}^{\infty}V\{|Y|>y^{1/p}\}h(y)\,dy+C\int_{1}^{\infty}V\{|Y|^{\theta}>x\}\int_{1}^{x^{p/\theta}}y^{-\theta/p}h(y)\,dy\,dx\\
&\le C\,C_V\left(|Y|^{p}h(|Y|^{p})\right)+C\int_{1}^{\infty}V\{|Y|^{\theta}>x\}x^{p/\theta-1}h(x^{p/\theta})\,dx
\le C\,C_V\left(|Y|^{p}h(|Y|^{p})\right)<\infty.
\end{aligned}$$

For $II$, from the Markov inequality under sub-linear expectations, the Hölder inequality and Lemmas 2.1 and 2.3, it follows that

$$\begin{aligned}
II&\le C\sum_{n=1}^{\infty}n^{-1}h(n)n^{-2/p}\hat{\mathbb{E}}\left|\sum_{i=-\infty}^{\infty}a_i\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right|^{2}\\
&\le C\sum_{n=1}^{\infty}n^{-1}h(n)n^{-2/p}\hat{\mathbb{E}}\left(\sum_{i=-\infty}^{\infty}a_i^{1/2}\left(a_i^{1/2}\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right)\right)^{2}\\
&\le C\sum_{n=1}^{\infty}n^{-1-2/p}h(n)\left(\sum_{i=-\infty}^{\infty}a_i\right)\sum_{i=-\infty}^{\infty}a_i\hat{\mathbb{E}}\left(\max_{1\le k\le n}\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right)^{2}\\
&=C\sum_{n=1}^{\infty}n^{-1-2/p}h(n)\left(\sum_{i=-\infty}^{\infty}a_i\right)\sum_{i=-\infty}^{\infty}a_i\hat{\mathbb{E}}\max_{1\le k\le n}\left(\left(\sum_{j=i+1}^{i+k}\left(Y'_j-\hat{\mathbb{E}}[Y'_j]\right)\right)^{+}\right)^{2}\\
&\le C\sum_{n=1}^{\infty}n^{-1-2/p}h(n)(\log n)^{2}\left[n\hat{\mathbb{E}}|Y'_1|^{2}\right]=C\sum_{n=1}^{\infty}n^{-2/p}h(n)(\log n)^{2}\hat{\mathbb{E}}|Y'_1|^{2}=:II_1.
\end{aligned}$$

By Lemma 4.2, we get $II_1<\infty$. Now we establish the almost sure convergence under $V$. Without loss of generality, we assume that $\hat{\mathbb{E}}(Y_1)=-\hat{\mathbb{E}}(-Y_1)=0$. Since $C_V(|Y|^{p})<\infty$, we have

$$\sum_{n=1}^{\infty}n^{-1}V\left\{\max_{1\le k\le n}|S_k|>\varepsilon n^{1/p}\right\}<\infty,\quad\text{for all }\varepsilon>0.$$

    Therefore,

$$\infty>\sum_{n=1}^{\infty}n^{-1}V\left\{\max_{1\le k\le n}|S_k|>\varepsilon n^{1/p}\right\}=\sum_{k=1}^{\infty}\sum_{n=2^{k-1}}^{2^{k}-1}n^{-1}V\left\{\max_{1\le m\le n}|S_m|>\varepsilon n^{1/p}\right\}\ge\sum_{k=1}^{\infty}\frac{1}{2}V\left\{\max_{1\le m\le 2^{k-1}}|S_m|>\varepsilon2^{k/p}\right\}.$$

By the Borel-Cantelli lemma under sub-linear expectations (cf. Lemma 1 of Zhang and Lin [11]), we get

$$2^{-k/p}\max_{1\le m\le 2^{k}}|S_m|\to0,\quad\text{a.s. }V,$$

which yields $S_n/n^{1/p}\to0$ a.s. $V$.

We have obtained new results on complete convergence for moving average processes produced by negatively dependent random variables under sub-linear expectations. The results obtained in this article extend the corresponding ones for negatively dependent random variables in classical probability space, and Theorems 3.1–3.4 complement the results of Xu et al. [5] and Xu and Kong [6]; moreover, in Remark 3.1 we establish Conjecture 3.1 of Xu and Kong [6] in some sense.

This study was supported by the Science and Technology Research Project of Jiangxi Provincial Department of Education of China (No. GJJ2201041), the Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), and the Re-accompanying Funding Project of Academic Achievements of Jingdezhen Ceramic University (Nos. 215/20506277, 215/20506135).

The author declares no conflict of interest in this article.



    [1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, Sto. Anal. Appl., 2 (2007), 541–561. https://doi.org/10.1007/978-3-540-70847-6_25 doi: 10.1007/978-3-540-70847-6_25
    [2] S. G. Peng, Nonlinear expectations and stochastic calculus under uncertainty, 1 Eds., Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7
    [3] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [4] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 doi: 10.1007/s11425-015-5105-2
    [5] M. Z. Xu, K. Cheng, W. K. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Math., 7 (2022), 19998–20019. https://doi.org/10.3934/math.20221094 doi: 10.3934/math.20221094
    [6] M. Z. Xu, X. H. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Math., 8 (2023), 8504–8521. https://doi.org/10.3934/math.2023428 doi: 10.3934/math.2023428
    [7] L. X. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0 doi: 10.1007/s40304-015-0055-0
    [8] J. P. Xu, L. X. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta Math. Sin., Engl. Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9 doi: 10.1007/s10114-018-7508-9
    [9] J. P. Xu, L. X. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta Math. Appl. Sin. Engl. Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8 doi: 10.1007/s10255-020-0958-8
    [10] Q. Y. Wu, Y. Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053 doi: 10.1016/j.jmaa.2017.11.053
    [11] L. X. Zhang, J. H. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probab. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022 doi: 10.1016/j.spl.2018.01.022
    [12] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1 doi: 10.1186/s13660-017-1538-1
    [13] F. Q. Gao, M. Z. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations, Sci. China Math., 41 (2011), 337–352. https://doi.org/10.1360/012009-879 doi: 10.1360/012009-879
    [14] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662 doi: 10.1016/j.jmaa.2019.123662
    [15] M. Z. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x doi: 10.1186/s13660-021-02692-x
[16] M. Z. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probab. Lett., 186 (2022), 109464.
    [17] L. X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci. Engl. Ser., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z doi: 10.1007/s10473-022-0203-z
    [18] Z. J. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0 doi: 10.1007/s11425-015-5095-0
    [19] L. X. Zhang, On the laws of the iterated logarithm under sub-linear expectations, PUQR, 6 (2021), 409–460. https://doi.org/10.3934/puqr.2021020 doi: 10.3934/puqr.2021020
    [20] X. C. Chen, Q. Y. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Math., 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 doi: 10.3934/math.2022540
    [21] P. Y. Chen, T. C. Hu, A. Volodin, Limiting behaviour of moving average processes under φ-mixing assumption, Stat. Probab. Lett., 79 (2009), 105–111. https://doi.org/10.1016/j.spl.2008.07.026 doi: 10.1016/j.spl.2008.07.026
[22] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, Proc. Natl. Acad. Sci. USA, 33 (1947), 25–31.
    [23] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201.
    [24] S. M. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta Math. Sin., Engl. Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z doi: 10.1007/s10114-019-8205-z
    [25] B. Meng, D. C. Wang, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Commun. Stat.-Theor. M., 51 (2022), 3847–3863. https://doi.org/10.1080/03610926.2020.1804587 doi: 10.1080/03610926.2020.1804587
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)