Research article

On the complete moment convergence of moving average processes generated by negatively dependent random variables under sub-linear expectations

  • Received: 19 August 2023 Revised: 02 December 2023 Accepted: 28 December 2023 Published: 05 January 2024
  • MSC : 60F05, 60F15

• The moving average processes $X_k=\sum_{i=-\infty}^{\infty}a_{i+k}Y_i$ are studied, where $\{Y_i,-\infty<i<\infty\}$ is a doubly infinite sequence of negatively dependent random variables under sub-linear expectations, and $\{a_i,-\infty<i<\infty\}$ is an absolutely summable sequence of real numbers. We establish the complete moment convergence of a moving average process under proper conditions, extending the corresponding results in classic probability space to those in sub-linear expectation space.

    Citation: Mingzhou Xu. On the complete moment convergence of moving average processes generated by negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2024, 9(2): 3369-3385. doi: 10.3934/math.2024165




Since Peng [1,2] initiated the concept of the sub-linear expectation space to study uncertainty in probability, many scholars have investigated limit theorems under sub-linear expectations. Zhang [3,4,5] studied exponential inequalities, Rosenthal's inequalities, and Donsker's invariance principle under sub-linear expectations. Chen and Wu [6] investigated complete convergence theorems for a moving average process generated by independent random variables under sub-linear expectations. Xu et al. [7] and Xu and Kong [8] obtained complete convergence and complete moment convergence of weighted sums of negatively dependent random variables under sub-linear expectations. For more limit theorems under sub-linear expectations, the reader may refer to Zhang [9], Xu and Zhang [10,11], Wu and Jiang [12], Zhang and Lin [13], Zhong and Wu [14], Hu et al. [15], Gao and Xu [16], Kuczmaszewska [17], Chen [18], Zhang [19], Chen and Wu [20], Xu and Cheng [21,22], Xu et al. [23], Xu [24,25], and the references therein.

Guo et al. [26] studied the complete moment convergence of moving average processes under negative association assumptions. For more results on the complete moment convergence of moving average processes, the interested reader may refer to Hosseini and Nezakati [27] and the references therein. Motivated by the work of Guo et al. [26], Chen and Wu [6], and Xu et al. [23], we prove the complete moment convergence of moving average processes generated by negatively dependent random variables under sub-linear expectations, complementing the corresponding results of Guo et al. [26]. The present article differs from the works of Xu et al. [7], Xu and Kong [8], and Xu [24,25] as follows: under sub-linear expectations, the complete convergence of weighted sums of negatively dependent or extended negatively dependent random variables is studied in Xu et al. [7], Xu and Kong [8], and Xu [25]; the complete convergence of moving average processes produced by negatively dependent random variables is studied in Xu [24]; and the complete moment convergence of moving average processes generated by negatively dependent random variables is investigated here. The novelty is that the results in this paper imply those in Xu and Kong [8] and Xu [24] in some sense, and they extend the corresponding ones in probability space.

The rest of this paper is organized as follows. In the next section we present the necessary basic notions, concepts, and corresponding properties, and give the lemmas needed under sub-linear expectations. In Section 3, we state our results, Theorems 3.1–3.3, whose proofs are given in Section 4.

Hereafter, we use notation similar to that in the works of Peng [2] and Zhang [4]. Assume that $(\Omega,\mathcal{F})$ is a given measurable space. Suppose that $\mathcal{H}$ is a set of random variables on $(\Omega,\mathcal{F})$ such that $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ for $X_1,\ldots,X_n\in\mathcal{H}$ and each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ is the set of locally Lipschitz functions $\varphi$ satisfying

$$|\varphi(x)-\varphi(y)|\le C(1+|x|^m+|y|^m)(|x-y|),\quad x,y\in\mathbb{R}^n,$$

for some $C>0$ and $m\in\mathbb{N}$ depending on $\varphi$.

Definition 2.1. A sub-linear expectation $\mathbb{E}$ on $\mathcal{H}$ is a functional $\mathbb{E}:\mathcal{H}\to\bar{\mathbb{R}}:=[-\infty,\infty]$ satisfying the following: for every $X,Y\in\mathcal{H}$,

(a) $X\ge Y$ implies $\mathbb{E}[X]\ge\mathbb{E}[Y]$;

(b) $\mathbb{E}[c]=c$, $\forall c\in\mathbb{R}$;

(c) $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$, $\forall\lambda\ge 0$;

(d) $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$ whenever $\mathbb{E}[X]+\mathbb{E}[Y]$ is not of the form $\infty-\infty$ or $-\infty+\infty$.

Definition 2.2. We say that $\{X_n;n\ge 1\}$ is stochastically dominated by a random variable $X$ in $(\Omega,\mathcal{H},\mathbb{E})$ if there exists a constant $C$ such that, for all $n\ge 1$ and all non-negative $h\in C_{l,Lip}(\mathbb{R})$, $\mathbb{E}(h(X_n))\le C\mathbb{E}(h(X))$.

A function $V:\mathcal{F}\to[0,1]$ is called a capacity if

(a) $V(\emptyset)=0$, $V(\Omega)=1$;

(b) $V(A)\le V(B)$ for $A\subset B$, $A,B\in\mathcal{F}$.

Furthermore, if $V$ is continuous, then $V$ satisfies

(c) $A_n\uparrow A$ yields $V(A_n)\uparrow V(A)$;

(d) $A_n\downarrow A$ yields $V(A_n)\downarrow V(A)$.

$V$ is said to be sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{F}$.

In $(\Omega,\mathcal{H},\mathbb{E})$, set $\mathbb{V}(A):=\inf\{\mathbb{E}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\}$, $\forall A\in\mathcal{F}$ (cf. Zhang [3]). $\mathbb{V}$ is a sub-additive capacity. Write

$$C_{\mathbb{V}}(X):=\int_0^{\infty}\mathbb{V}(X>x)\,\mathrm{d}x+\int_{-\infty}^{0}\big(\mathbb{V}(X>x)-1\big)\,\mathrm{d}x.$$
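When the capacity reduces to an ordinary probability measure, the Choquet integral $C_{\mathbb{V}}$ above coincides with the classical expectation. A small Monte Carlo sketch in Python illustrates this (the sample size, grid, and the choice $X=|Y|$ with $Y$ standard normal are illustrative assumptions, not part of the paper):

```python
import numpy as np

# When V is an ordinary probability P, the Choquet integral
#   C_V(X) = \int_0^\infty V(X > x) dx     (for X >= 0)
# coincides with the classical expectation E[X].  We check this
# numerically for X = |Y| with Y standard normal.
rng = np.random.default_rng(0)
sample = np.sort(np.abs(rng.standard_normal(200_000)))

xs = np.linspace(0.0, 8.0, 4001)                            # integration grid
survival = 1.0 - np.searchsorted(sample, xs) / sample.size  # empirical P(X > x)
# trapezoidal rule for the tail integral
choquet = ((survival[:-1] + survival[1:]) / 2 * np.diff(xs)).sum()

# Both quantities approximate E|Y| = sqrt(2/pi) ~ 0.798
assert abs(choquet - sample.mean()) < 0.01
```

For a genuinely sub-linear $\mathbb{V}$ the two notions differ, which is why the results below are stated in terms of $C_{\mathbb{V}}$.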

As in Section 4.3 of Zhang [3], throughout this paper we define an extension $\mathbb{E}^*$ of $\mathbb{E}$ to the space of all random variables by

$$\mathbb{E}^*(X)=\inf\{\mathbb{E}[Y]:X\le Y,\ Y\in\mathcal{H}\}.$$

Then $\mathbb{E}^*$ is a sub-linear expectation on the space of all random variables, $\mathbb{E}^*[X]=\mathbb{E}[X]$ for $X\in\mathcal{H}$, and $\mathbb{V}(A)=\mathbb{E}^*(I_A)$, $\forall A\in\mathcal{F}$.

Suppose that $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal{H}$, and $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal{H}$, are two random vectors on $(\Omega,\mathcal{H},\mathbb{E})$. $Y$ is said to be negatively dependent on $X$ if, for $\psi_1$ on $C_{l,Lip}(\mathbb{R}^m)$ and $\psi_2$ on $C_{l,Lip}(\mathbb{R}^n)$, we have $\mathbb{E}[\psi_1(X)\psi_2(Y)]\le\mathbb{E}[\psi_1(X)]\mathbb{E}[\psi_2(Y)]$ whenever $\psi_1(X)\ge 0$, $\mathbb{E}[\psi_2(Y)]\ge 0$, $\mathbb{E}[|\psi_1(X)\psi_2(Y)|]<\infty$, $\mathbb{E}[|\psi_1(X)|]<\infty$, $\mathbb{E}[|\psi_2(Y)|]<\infty$, and either $\psi_1$ and $\psi_2$ are coordinatewise nondecreasing or $\psi_1$ and $\psi_2$ are coordinatewise nonincreasing (see Definition 2.3 of Zhang [3] and Definition 1.5 of Zhang [4]). $\{X_n\}_{n=-\infty}^{\infty}$ is said to be negatively dependent if $X_{n+l}$ is negatively dependent on $(X_l,X_{l+1},\ldots,X_{l+n-1})$ for each $n\ge 1$ and $-\infty<l<\infty$.
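In the classical case (linear expectation), a standard example of this inequality is the antithetic pair $(U,1-U)$: any nondecreasing function of $1-U$ is a nonincreasing function of $U$, so its covariance with a nondecreasing function of $U$ is non-positive. A quick numerical sketch (the choice $\psi_1(u)=u$, $\psi_2(v)=v$ and the sample size are illustrative assumptions):

```python
import numpy as np

# Classical sanity check of the negative-dependence inequality
#   E[psi1(X) psi2(Y)] <= E[psi1(X)] E[psi2(Y)]
# for the antithetic pair X = U, Y = 1 - U with U uniform on (0, 1).
rng = np.random.default_rng(1)
u = rng.random(200_000)

psi1 = u          # nondecreasing and nonnegative in X = U
psi2 = 1.0 - u    # the nondecreasing identity applied to Y = 1 - U

lhs = np.mean(psi1 * psi2)           # ~ E[U(1-U)] = 1/6
rhs = np.mean(psi1) * np.mean(psi2)  # ~ (1/2)(1/2) = 1/4
assert lhs < rhs                     # negative dependence holds
```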

Suppose that $X_1$ and $X_2$ are two $n$-dimensional random vectors defined in $(\Omega_1,\mathcal{H}_1,\mathbb{E}_1)$ and $(\Omega_2,\mathcal{H}_2,\mathbb{E}_2)$, respectively. They are said to be identically distributed if for every $\psi\in C_{l,Lip}(\mathbb{R}^n)$,

$$\mathbb{E}_1[\psi(X_1)]=\mathbb{E}_2[\psi(X_2)].$$

$\{X_n;n\ge 1\}$ is said to be identically distributed if, for every $i\ge 1$, $X_i$ and $X_1$ are identically distributed.

Throughout this paper, we suppose that $\mathbb{E}$ is countably sub-additive, i.e., $\mathbb{E}(X)\le\sum_{n=1}^{\infty}\mathbb{E}(X_n)$ whenever $X\le\sum_{n=1}^{\infty}X_n$, $X,X_n\in\mathcal{H}$, and $X\ge 0$, $X_n\ge 0$, $n=1,2,\ldots$. Then $\mathbb{E}^*$ is also countably sub-additive, and so is $\mathbb{V}$ (cf. Zhang [3]). Let $C$ denote a positive constant which may change from line to line. $I(A)$ or $I_A$ is the indicator function of $A$. The symbol $a_x\asymp b_x$ means that there exist two positive constants $C_1,C_2$ satisfying $C_1|b_x|\le|a_x|\le C_2|b_x|$; $x^+$ stands for $\max\{x,0\}$ and $x^-=(-x)^+$ for $x\in\mathbb{R}$; and $a\vee b=\max\{a,b\}$ for $a,b\in\mathbb{R}$.

As in Zhang [4], if $X_1,X_2,\ldots,X_n$ are negatively dependent random variables and $f_1(x),f_2(x),\ldots,f_n(x)\in C_{l,Lip}(\mathbb{R})$ are all nonincreasing (or all nondecreasing) functions, then $f_1(X_1),f_2(X_2),\ldots,f_n(X_n)$ are negatively dependent random variables.

We cite the following lemmas under sub-linear expectations.

Lemma 2.1. (cf. Lemma 4.5 (iii) of Zhang [3]) If $\mathbb{E}$ is countably sub-additive under $(\Omega,\mathcal{H},\mathbb{E})$, then for $X\in\mathcal{H}$,

$$\mathbb{E}|X|\le C_{\mathbb{V}}(|X|).$$

Lemma 2.2. (cf. Theorem 2.1 of Zhang [4]) Write $S_k=Y_1+\cdots+Y_k$, $S_0=0$. Suppose that $Y_{k+1}$ is negatively dependent on $(Y_1,\ldots,Y_k)$ for $k=1,2,\ldots,n-1$, or $Y_k$ is negatively dependent on $(Y_{k+1},\ldots,Y_n)$ for $k=0,\ldots,n-1$, in the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Then for $p\ge 2$,

$$\mathbb{E}\Big[\max_{k\le n}|S_k|^p\Big]\le C_p\Big\{\sum_{k=1}^{n}\mathbb{E}[|Y_k|^p]+\Big(\sum_{k=1}^{n}\mathbb{E}[|Y_k|^2]\Big)^{p/2}+\Big(\sum_{k=1}^{n}\big[|\mathbb{E}(-Y_k)|+|\mathbb{E}(Y_k)|\big]\Big)^{p}\Big\}. \quad (2.1)$$

    By Lemma 2.2 of Zhong and Wu [14], the following lemma holds.

Lemma 2.3. Suppose that $Y\in\mathcal{H}$, $r>0$, $p>0$, and $l(x)$ is a slowly varying function.

(i) For any $c>0$,

$$C_{\mathbb{V}}\{|Y|^r l(|Y|^p)\}<\infty \iff \sum_{n=1}^{\infty}n^{r/p-1}l(n)\,\mathbb{V}\big(|Y|>cn^{1/p}\big)<\infty.$$

(ii) Suppose that $C_{\mathbb{V}}\{|Y|^r l(|Y|^p)\}<\infty$. Then for any $\theta>1$ and $c>0$,

$$\sum_{k=1}^{\infty}\theta^{kr/p}l(\theta^k)\,\mathbb{V}\big(|Y|>c\theta^{k/p}\big)<\infty.$$

Theorem 3.1. Assume that $X_n=\sum_{i=-\infty}^{\infty}a_{i+n}Y_i$, $n\ge 1$, where $\{a_i,-\infty<i<\infty\}$ is a sequence of real numbers satisfying $\sum_{i=-\infty}^{\infty}|a_i|<\infty$, $\{Y_i,-\infty<i<\infty\}$ is a sequence of negatively dependent random variables, and $\{Y_i,-\infty<i<\infty\}$ is stochastically dominated by $Y$ in the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Let $l(x)$ be a slowly varying function, and let $1\le p<2$, $r\ge 1+p/2$. Suppose that $\mathbb{E}(Y_i)=\mathbb{E}(-Y_i)=0$ for all $-\infty<i<\infty$ and $C_{\mathbb{V}}\big(|Y|^r(1\vee l(|Y|^p))\big)<\infty$. Then

$$\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon n^{1/(pt)}\Big]^+\Big\}<\infty,\quad \text{for all }\epsilon>0 \text{ and } t>1/r, \quad (3.1)$$

and

$$\sum_{n=1}^{\infty}n^{r/p-2}l(n)\,C_{\mathbb{V}}\Big\{\Big[\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon\Big]^+\Big\}<\infty,\quad \text{for all }\epsilon>0 \text{ and } t>1/r. \quad (3.2)$$

Theorem 3.2. Suppose that $X_n=\sum_{i=-\infty}^{\infty}a_{i+n}Y_i$, $n\ge 1$, where $\{a_i,-\infty<i<\infty\}$ is a sequence of real numbers satisfying $\sum_{i=-\infty}^{\infty}|a_i|<\infty$, $\{Y_i,-\infty<i<\infty\}$ is a sequence of negatively dependent random variables, and $\{Y_i,-\infty<i<\infty\}$ is stochastically dominated by $Y$ in the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Let $l(x)$ be a non-decreasing slowly varying function. Assume $1\le p<2$, $r>1+p/2$. Suppose that $\mathbb{E}(Y_i)=\mathbb{E}(-Y_i)=0$ for all $-\infty<i<\infty$ and $C_{\mathbb{V}}\big(|Y|^{1/t}(1\vee l(|Y|^p))\big)<\infty$. Then

$$\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon n^{1/(pt)}\Big]^+\Big\}<\infty,\quad \text{for all }\epsilon>0 \text{ and } t<1/r, \quad (3.3)$$

and

$$\sum_{n=1}^{\infty}n^{r/p-2}l(n)\,C_{\mathbb{V}}\Big\{\Big[\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon\Big]^+\Big\}<\infty,\quad \text{for all }\epsilon>0 \text{ and } t<1/r. \quad (3.4)$$

Theorem 3.3. Assume that $X_n=\sum_{i=-\infty}^{\infty}a_{i+n}Y_i$, $n\ge 1$, where $\{a_i,-\infty<i<\infty\}$ is a sequence of real numbers satisfying $\sum_{i=-\infty}^{\infty}|a_i|<\infty$, $\{Y_i,-\infty<i<\infty\}$ is a sequence of negatively dependent random variables, and $\{Y_i,-\infty<i<\infty\}$ is stochastically dominated by $Y$ in the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Assume that $l(x)$ is a slowly varying function and $1<p<2$. Suppose that $\mathbb{E}(Y_i)=\mathbb{E}(-Y_i)=0$ for $-\infty<i<\infty$ and $C_{\mathbb{V}}\big(|Y|^p(1\vee l(|Y|^p))\big)<\infty$. Then

$$\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon n^{1/(pt)}\Big]^+\Big\}<\infty,\quad \text{for all }\epsilon>0 \text{ and } t>1/p. \quad (3.5)$$

As in Remark 2.3 of Guo et al. [26] and Remark 1.2 of Li and Zhang [28], by Theorems 3.1 and 3.3, we obtain the following corollaries.

Corollary 3.1. Under the assumptions of Theorem 3.1, suppose that $C_{\mathbb{V}}\big(|Y|^r(1\vee l(|Y|^p))\big)<\infty$. Then

$$\sum_{n=1}^{\infty}n^{r/p-2}l(n)\,\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|>\epsilon n^{1/p}\Big\}<\infty\quad \text{for all }\epsilon>0,$$
$$\sum_{n=1}^{\infty}n^{r/p-2}l(n)\,\mathbb{V}\Big\{\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|>\epsilon\Big\}<\infty\quad \text{for all }\epsilon>0.$$

Corollary 3.2. Under the assumptions of Theorem 3.3, suppose that $C_{\mathbb{V}}\big(|Y|^p(1\vee l(|Y|^p))\big)<\infty$. Then

$$\sum_{n=1}^{\infty}n^{-1}l(n)\,\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|>\epsilon n^{1/p}\Big\}<\infty\quad \text{for all }\epsilon>0.$$
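As a heuristic, classical-probability illustration of the normalization in these corollaries (a Python sketch; the choices $p=1.5$, standard normal noise, the sample size, and the trivial coefficients $a_0=1$, $a_i=0$ otherwise, are illustrative assumptions), the maximal partial sum typically grows like $\sqrt{n}$, which is negligible against $n^{1/p}=n^{2/3}$, so the event in Corollary 3.1 becomes rare as $n$ grows:

```python
import numpy as np

# Moving average with a_0 = 1 and all other a_i = 0 reduces to X_k = Y_k,
# so the partial sums are an ordinary random walk.
rng = np.random.default_rng(2)
p = 1.5
n = 100_000
Y = rng.standard_normal(n)
S = np.cumsum(Y)                     # partial sums S_1, ..., S_n

# max_{1<=k<=n} |S_k| is of order sqrt(n) << n^{1/p} = n^{2/3}
ratio = np.max(np.abs(S)) / n ** (1 / p)
assert 0.0 < ratio < 1.0
```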

Remark 3.1. In Theorems 3.1–3.3 and Corollaries 3.1, 3.2, we assume that $\mathbb{E}(Y_j)=\mathbb{E}(-Y_j)=0$, $j\ge 1$. Readers may wonder what the intrinsic difference between a sub-linear expectation and a linear expectation in probability space is. The following example heuristically illustrates the difference to some extent. Suppose that $Y_1$ is $G$-normally distributed, i.e., for $a,b>0$, $aY_1+b\bar{Y}_1$ and $\sqrt{a^2+b^2}\,Y_1$ are identically distributed, where $\bar{Y}_1$ and $Y_1$ are independent and identically distributed (cf. Definition 2.2.8 and Remark 2.2.9 of Peng [2]). We know that $\mathbb{E}(Y_1)=\mathbb{E}(-Y_1)=0$ (cf. Remark 2.2.5 of Peng [2]). Assume that $\mathbb{E}(Y_1^2)=1>-\mathbb{E}(-Y_1^2)>0$. Then, by Remarks 3 and 14 of Hu [29], we know that $\mathbb{E}(Y_1^{2n+1})=\mathbb{E}(-Y_1^{2n+1})>0$, $n\ge 1$. Hence, for any $n\ge 2$, $\mathbb{E}(Y_1^n)\neq-\mathbb{E}(-Y_1^n)$ (cf. Proposition 2.2.15 of Peng [2]).

Hereafter, as in Chen and Wu [6], we define some useful functions. Assume that $2^{-1/p}<\mu<1$, and let $g(y)\in C_{l,Lip}(\mathbb{R})$ be a function that is decreasing in $y\ge 0$ and satisfies $0\le g(y)\le 1$ for all $y$, $g(y)=1$ if $|y|\le\mu$, and $g(y)=0$ if $|y|>1$. We see that

$$I(|y|\le\mu)\le g(|y|)\le I(|y|\le 1),\quad I(|y|>1)\le 1-g(|y|)\le I(|y|>\mu). \quad (4.1)$$

Define $g_j(y)\in C_{l,Lip}(\mathbb{R})$, $j\ge 1$, such that $0\le g_j(y)\le 1$ for all $y$, $g_j(|y|/2^{j/p})=1$ if $2^{(j-1)/p}<|y|\le 2^{j/p}$, and $g_j(|y|/2^{j/p})=0$ if $|y|\le\mu 2^{(j-1)/p}$ or $|y|>(1+\mu)2^{j/p}$. We see that

$$I\big(2^{(j-1)/p}<|Y|\le 2^{j/p}\big)\le g_j\big(|Y|/2^{j/p}\big)\le I\big(\mu 2^{(j-1)/p}<|Y|\le(1+\mu)2^{j/p}\big), \quad (4.2)$$
$$|Y|^{\alpha}g\big(|Y|/2^{k/p}\big)\le 1+\sum_{j=1}^{k}|Y|^{\alpha}g_j\big(|Y|/2^{j/p}\big),\quad \alpha>0, \quad (4.3)$$
$$|Y|^{\alpha}\big(1-g(|Y|/2^{k/p})\big)\le\sum_{j=k}^{\infty}|Y|^{\alpha}g_j\big(|Y|/2^{j/p}\big),\quad \alpha>0. \quad (4.4)$$
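One admissible choice of $g$ is piecewise linear (hence locally Lipschitz): equal to $1$ on $[0,\mu]$, equal to $0$ on $[1,\infty)$, and linear in between. A short sketch verifying the sandwich bounds in (4.1) numerically (the value $\mu=0.8$ is an illustrative choice, valid e.g. for $p=1.5$ since $2^{-2/3}\approx 0.63<0.8<1$):

```python
import numpy as np

mu = 0.8  # any 2**(-1/p) < mu < 1 works; 0.8 is illustrative

def g(y):
    # piecewise-linear truncation function:
    # g = 1 on [0, mu], g = 0 on [1, inf), linear on [mu, 1]
    y = np.abs(y)
    return np.clip((1.0 - y) / (1.0 - mu), 0.0, 1.0)

ys = np.linspace(-3.0, 3.0, 1201)
lower = (np.abs(ys) <= mu).astype(float)   # I(|y| <= mu)
upper = (np.abs(ys) <= 1.0).astype(float)  # I(|y| <= 1)

# the sandwich I(|y| <= mu) <= g(|y|) <= I(|y| <= 1) from (4.1)
assert np.all(lower <= g(ys) + 1e-12)
assert np.all(g(ys) <= upper + 1e-12)
```

The functions $g_j$ can be built the same way on the annuli $(\mu 2^{(j-1)/p},(1+\mu)2^{j/p}]$.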

Proof of Theorem 3.1. We adopt some ideas from the proof of Theorem 2.1 in Guo et al. [26]. Write $Y^{(1)}_{xi}=-xI(Y_i\le -x)+Y_iI(|Y_i|<x)+xI(Y_i\ge x)$, $Y^{(2)}_{xi}=Y_i-Y^{(1)}_{xi}$, $Y^{(1)}_{x}=-xI(Y\le -x)+YI(|Y|<x)+xI(Y\ge x)$, and $Y^{(2)}_{x}=Y-Y^{(1)}_{x}$ for any $x\ge 0$ and $-\infty<i<\infty$. Note that

$$\sum_{k=1}^{n}X_k=\sum_{k=1}^{n}\sum_{i=-\infty}^{\infty}a_{i+k}Y_i=\sum_{i=-\infty}^{\infty}a_i\sum_{k=1}^{n}Y_{i-k}=\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-n}^{i-1}Y_j.$$
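The truncation $Y^{(1)}_x$ is simply $Y$ clipped to $[-x,x]$, and the index-shift identity above can be checked exactly when the coefficients are finitely supported. A small sketch (the coefficient values, index ranges, and noise are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Truncation: Y1 clips Y to [-x, x]; Y2 is the (possibly unbounded) remainder.
x = 2.0
y = np.array([-5.0, -1.0, 0.5, 3.0])
y1 = np.clip(y, -x, x)
y2 = y - y1
assert np.allclose(y1, [-2.0, -1.0, 0.5, 2.0])
assert np.allclose(y2, [-3.0, 0.0, 0.0, 1.0])

# Index-shift identity: sum_{k=1}^n X_k = sum_i a_i sum_{j=i-n}^{i-1} Y_j
a = {-1: 0.5, 0: 1.0, 1: -0.3, 2: 0.2}     # finitely supported a_i
n = 20
Y = dict(zip(range(-30, 40), rng.standard_normal(70)))

def X(k):
    # X_k = sum_i a_{i+k} Y_i, rewritten with m = i + k as sum_m a_m Y_{m-k}
    return sum(a_m * Y[m - k] for m, a_m in a.items())

lhs = sum(X(k) for k in range(1, n + 1))
rhs = sum(a_i * sum(Y[j] for j in range(i - n, i)) for i, a_i in a.items())
assert np.isclose(lhs, rhs)
```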

    We see that

$$\begin{aligned}
&\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon n^{1/(pt)}\Big]^+\Big\}\\
&=\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{\epsilon n^{1/(pt)}}^{\infty}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|>x^t\Big\}\mathrm{d}x\quad(\text{letting } y=(x/\epsilon)^t)\\
&=\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|>\epsilon^t y\Big\}\frac{\epsilon}{t}y^{\frac1t-1}\mathrm{d}y\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(2)}_{xj}\Big|\ge\frac{x\epsilon^t}{2}\Big\}\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|\ge\frac{x\epsilon^t}{2}\Big\}\mathrm{d}x\\
&=:I_1+I_2. \qquad (4.5)
\end{aligned}$$

For $I_1$, observe that $r/p-1-1/(pt)>-1$ and $C_{\mathbb{V}}(|Y|^r l(|Y|^p))<\infty$. By Lemmas 2.2 and 2.3, Markov's inequality under sub-linear expectations, (4.1), and (4.4), we get

$$\begin{aligned}
I_1&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-2}\mathbb{E}\Big[\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(2)}_{xj}\Big|\Big]\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-2}\max_{-\infty<i<\infty}\mathbb{E}\Big[\sum_{j=i-n}^{i-1}|Y_j|\big(1-g(|Y_j|/x)\big)\Big]\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-2}\mathbb{E}\big(|Y|(1-g(|Y|/x))\big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{k=n}^{\infty}k^{1/(pt)-1/p-1}\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\\
&\le C\sum_{k=1}^{\infty}k^{1/(pt)-1/p-1}\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\sum_{n=1}^{k}n^{r/p-1-1/(pt)}l(n)\\
&\le C\sum_{k=1}^{\infty}k^{r/p-1/p-1}l(k)\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)
=C\sum_{n=0}^{\infty}\sum_{k=2^n}^{2^{n+1}-1}k^{r/p-1/p-1}l(k)\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\\
&\le C\sum_{n=1}^{\infty}2^{n(r/p-1/p)}l(2^n)\mathbb{E}\big(|Y|(1-g(|Y|/2^{n/p}))\big)
\le C\sum_{n=1}^{\infty}2^{n(r/p-1/p)}l(2^n)\mathbb{E}\Big(\sum_{j=n}^{\infty}|Y|g_j(|Y|/2^{j/p})\Big)\\
&\le C\sum_{n=1}^{\infty}2^{n(r/p-1/p)}l(2^n)\sum_{j=n}^{\infty}\mathbb{E}\big(|Y|g_j(|Y|/2^{j/p})\big)
=C\sum_{j=1}^{\infty}\mathbb{E}\big(|Y|g_j(|Y|/2^{j/p})\big)\sum_{n=1}^{j}2^{n(r/p-1/p)}l(2^n)\\
&\le C\sum_{j=1}^{\infty}2^{jr/p}l(2^j)\mathbb{V}\{|Y|>\mu 2^{(j-1)/p}\}<\infty. \qquad (4.6)
\end{aligned}$$

Next we estimate $I_2$. By Lemma 2.2, Markov's inequality under sub-linear expectations, and Hölder's inequality, we see that for $q\ge 2$,

$$\begin{aligned}
I_2&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1}x^{-q}\,\mathbb{E}\Big[\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|^q\Big]\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\,\mathbb{E}\Big[\max_{1\le k\le n}\Big(\sum_{i=-\infty}^{\infty}\big(|a_i|^{1-1/q}\big)\big(|a_i|^{1/q}\big)\Big|\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|\Big)^q\Big]\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\Big(\sum_{i=-\infty}^{\infty}|a_i|\Big)^{q-1}\Big(\sum_{i=-\infty}^{\infty}|a_i|\,\mathbb{E}\Big(\max_{1\le k\le n}\Big|\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|^q\Big)\Big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\max_{-\infty<i<\infty}\mathbb{E}\Big(\max_{1\le k\le n}\Big|\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|^q\Big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\mathbb{E}|Y^{(1)}_{xj}|^q\Big)\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\mathbb{E}(|Y^{(1)}_{xj}|^2)\Big)^{q/2}\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\big[|\mathbb{E}(-Y^{(1)}_{xj})|+|\mathbb{E}(Y^{(1)}_{xj})|\big]\Big)^{q}\mathrm{d}x\\
&=:I_{21}+I_{22}+I_{23}.
\end{aligned}$$

For $I_{21}$, take $q>\max\{r,2\}$. By Lemma 2.3, (4.1)–(4.3), and the fact that, for $x>0$, $f(\cdot):=|\cdot|^qI(|\cdot|\le x)+x^qI(|\cdot|>x)\in C_{l,Lip}(\mathbb{R})$, we see that

$$\begin{aligned}
I_{21}&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\big(n\,\mathbb{E}|Y^{(1)}_{x}|^q\big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\big[x^q\mathbb{E}\big(1-g(|Y|/x)\big)+\mathbb{E}\big(|Y|^qg(\mu|Y|/x)\big)\big]\mathrm{d}x\\
&=C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{\frac1t-1}\mathbb{E}\big(1-g(|Y|/x)\big)\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{\frac1t-1-q}\mathbb{E}\big(|Y|^qg(\mu|Y|/x)\big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{m=n}^{\infty}m^{\frac{1}{tp}-1}\mathbb{V}\{|Y|>\mu m^{1/p}\}\\
&\quad+C\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)\sum_{m=n}^{\infty}m^{\frac{1}{tp}-1-q/p}\mathbb{E}\big(|Y|^qg(\mu|Y|/(m+1)^{1/p})\big)\\
&\le C\sum_{m=1}^{\infty}m^{\frac{r}{p}-1}l(m)\mathbb{V}\{|Y|>\mu m^{1/p}\}+C\sum_{m=1}^{\infty}m^{\frac{1}{tp}-1-q/p}\mathbb{E}\big(|Y|^qg(\mu|Y|/(m+1)^{1/p})\big)\sum_{n=1}^{m}n^{r/p-1-1/(pt)}l(n)\\
&\le C+C\sum_{k=0}^{\infty}\sum_{m=2^k}^{2^{k+1}-1}m^{\frac{r}{p}-1-q/p}l(m)\mathbb{E}\big(|Y|^qg(\mu|Y|/(m+1)^{1/p})\big)\\
&\le C+C\sum_{k=1}^{\infty}2^{k(r/p-q/p)}l(2^k)\mathbb{E}\big(|Y|^qg(\mu|Y|/2^{(k+1)/p})\big)\\
&\le C+C\sum_{k=1}^{\infty}2^{k(r/p-q/p)}l(2^k)\mathbb{E}\Big(1+\sum_{j=1}^{k}|Y|^qg_j(\mu|Y|/2^{(j+1)/p})\Big)\\
&\le C+C\sum_{k=1}^{\infty}2^{k(r/p-q/p)}l(2^k)+C\sum_{j=1}^{\infty}2^{jq/p}\mathbb{V}\{|Y|>2^{j/p}\}\sum_{k=j}^{\infty}2^{k(r/p-q/p)}l(2^k)\\
&\le C+C\sum_{j=1}^{\infty}2^{jr/p}l(2^j)\mathbb{V}\{|Y|>2^{j/p}\}<\infty. \qquad (4.7)
\end{aligned}$$

For $I_{22}$, we consider the following two cases. If $r\le 2$, we take $q>2$. Note that then $r/p-(r/p-1)q/2<1$ and $r/p-2-1/(pt)+q/2>-1$. We get

$$\begin{aligned}
I_{22}&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)+q/2}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\big(\mathbb{E}|Y^{(1)}_{x}|^2\big)^{q/2}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)+q/2}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}x^{(2-r)q/2}\big(\mathbb{E}|Y^{(1)}_{x}|^r\big)^{q/2}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-(r/p-1)q/2-2}\big(\mathbb{E}|Y|^r\big)^{q/2}
\le C\sum_{n=1}^{\infty}n^{r/p-(r/p-1)q/2-2}\big(C_{\mathbb{V}}(|Y|^r)\big)^{q/2}<\infty. \qquad (4.8)
\end{aligned}$$

If $r>2$, we take $q>\max\{2p(r/p-1)/(2-p),\,1/t\}$; then $r/p-q/p+q/2<1$. Note that $\mathbb{E}(Y^2)\le C_{\mathbb{V}}(Y^2)\le C\,C_{\mathbb{V}}\big(|Y|^r l(|Y|^p)\big)<\infty$ in this case. Therefore, we get

$$I_{22}\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)+q/2}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1-q}\mathrm{d}x\le C\sum_{n=1}^{\infty}n^{r/p-2-q/p+q/2}l(n)<\infty. \qquad (4.9)$$

Combining (4.8) and (4.9) yields $I_{22}<\infty$.

For $I_{23}$, we take $q>2$. Observe that $r\ge 1+p/2>p$. By $\mathbb{E}(Y_i)=\mathbb{E}(-Y_i)=0$, Proposition 1.3.7 of Peng [2], and Lemma 2.1, we see that

$$\begin{aligned}
I_{23}&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-1-q}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\big[\mathbb{E}|Y^{(1)}_{xj}-Y_j|+\mathbb{E}|{-Y^{(1)}_{xj}}+Y_j|\big]\Big)^{q}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-1-q}\Big(\max_{-\infty<i<\infty}\sum_{j=i-n}^{i-1}\mathbb{E}|Y^{(1)}_{xj}-Y_j|\Big)^{q}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)+q}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-1-q}\big(\mathbb{E}|Y|(1-g(|Y|/x))\big)^{q}\mathrm{d}x\\
&\le C\sum_{k=1}^{\infty}k^{\frac{1}{pt}-1-q/p}\big(\mathbb{E}|Y|(1-g(|Y|/k^{1/p}))\big)^{q}\sum_{n=1}^{k}n^{r/p-2-1/(pt)+q}l(n)\\
&\le C\sum_{k=1}^{\infty}k^{\frac{1}{pt}-1-q/p}\Big(\frac{\mathbb{E}\big[|Y|^r l(|Y|^p)\big]}{k^{(r-1)/p}l(k)}\Big)^{q}k^{r/p-1-1/(pt)+q}l(k)\\
&\le C\sum_{k=1}^{\infty}k^{-(r/p-1)(q-1)-1}\,l(k)^{-(q-1)}\big(C_{\mathbb{V}}\{|Y|^r l(|Y|^p)\}\big)^{q}<\infty. \qquad (4.10)
\end{aligned}$$

Hence, by (4.5)–(4.10), we establish (3.1).

Now we prove (3.2). By $r/p>1$ and the countable sub-additivity of $\mathbb{V}$, we obtain

$$\begin{aligned}
&\sum_{n=1}^{\infty}n^{r/p-2}l(n)\,C_{\mathbb{V}}\Big\{\Big[\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon\Big]^+\Big\}
=\sum_{n=1}^{\infty}n^{r/p-2}l(n)\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}>x\Big\}\mathrm{d}x\\
&=\sum_{j=0}^{\infty}\sum_{n=2^j}^{2^{j+1}-1}n^{r/p-2}l(n)\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{k\ge n}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}>x\Big\}\mathrm{d}x\\
&\le C\sum_{j=0}^{\infty}2^{j(r/p-1)}l(2^j)\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{k\ge 2^j}\Big|k^{-1/p}\sum_{i=1}^{k}X_i\Big|^{1/t}>x\Big\}\mathrm{d}x\\
&\le C\sum_{j=0}^{\infty}2^{j(r/p-1)}l(2^j)\sum_{\ell=j}^{\infty}\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{2^{\ell}\le k\le 2^{\ell+1}}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}>x2^{\ell/(pt)}\Big\}\mathrm{d}x\\
&\le C\sum_{\ell=0}^{\infty}2^{\ell(r/p-1)}l(2^{\ell})\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{2^{\ell}\le k\le 2^{\ell+1}}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}>x2^{\ell/(pt)}\Big\}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2}l(n)\int_{\epsilon}^{\infty}\mathbb{V}\Big\{\sup_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}>xn^{1/(pt)}\Big\}\mathrm{d}x\quad(\text{letting } \epsilon'=\epsilon 2^{-1/(pt)})\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\int_{\epsilon' n^{1/(pt)}}^{\infty}\mathbb{V}\Big\{\sup_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}>x\Big\}\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{r/p-2-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon' n^{1/(pt)}\Big]^+\Big\}<\infty. \qquad (4.11)
\end{aligned}$$

    Hence (3.2) is proved.

Proof of Theorem 3.2. As in the proof of Theorem 3.1, it suffices to prove that $I_1<\infty$, $I_{21}<\infty$, $I_{22}<\infty$, and $I_{23}<\infty$. Indeed, observe that $r/p-1-1/(pt)<-1$ yields $\sum_{n=1}^{\infty}n^{r/p-1-1/(pt)}l(n)<\infty$. Therefore, by the proofs of (4.6) and (4.4) and Lemma 2.3, we get

$$\begin{aligned}
I_1&\le C\sum_{k=1}^{\infty}k^{1/(pt)-1/p-1}\mathbb{E}\big[|Y|(1-g(|Y|/k^{1/p}))\big]\sum_{n=1}^{k}n^{r/p-1-1/(pt)}l(n)
\le C\sum_{k=1}^{\infty}k^{1/(pt)-1/p-1}\mathbb{E}\big[|Y|(1-g(|Y|/k^{1/p}))\big]\\
&=C\sum_{n=0}^{\infty}\sum_{k=2^n}^{2^{n+1}-1}k^{1/(pt)-1/p-1}\mathbb{E}\big[|Y|(1-g(|Y|/k^{1/p}))\big]
\le C\sum_{n=1}^{\infty}2^{n(1/(pt)-1/p)}\mathbb{E}\big[|Y|(1-g(|Y|/2^{n/p}))\big]\\
&\le C\sum_{n=1}^{\infty}2^{n(1/(pt)-1/p)}\mathbb{E}\Big[\sum_{j=n}^{\infty}|Y|g_j(|Y|/2^{j/p})\Big]
\le C\sum_{j=1}^{\infty}\mathbb{E}\big[|Y|g_j(|Y|/2^{j/p})\big]\sum_{n=1}^{j}2^{n(1/(pt)-1/p)}\\
&\le C\sum_{j=1}^{\infty}2^{j/(pt)}\mathbb{E}\big[g_j(|Y|/2^{j/p})\big]
\le C\sum_{j=1}^{\infty}2^{j/(pt)}\mathbb{V}\{|Y|>\mu 2^{(j-1)/p}\}<\infty.
\end{aligned}$$

For $I_{22}$ and $I_{23}$, we take $q>\max\{1/t,\,2(r-p)/(2-p),\,2+2(1/t-r)/p\}$. By the proofs of (4.8)–(4.10), we obtain $I_{22}<\infty$ and $I_{23}<\infty$.

For $I_{21}$, take $q>\max\{2,1/t\}$. By the proof of (4.7) and (4.3), we see that

$$\begin{aligned}
I_{21}&\le C\sum_{m=1}^{\infty}m^{1/(pt)-q/p-1}\mathbb{E}\big[|Y|^qg(\mu|Y|/(m+1)^{1/p})\big]\sum_{n=1}^{m}n^{r/p-1-1/(pt)}l(n)\\
&\le C\sum_{k=0}^{\infty}\sum_{m=2^k}^{2^{k+1}-1}m^{1/(pt)-q/p-1}\mathbb{E}\big[|Y|^qg(\mu|Y|/(m+1)^{1/p})\big]
\le C\sum_{k=1}^{\infty}2^{k(1/(pt)-q/p)}\mathbb{E}\big[|Y|^qg(\mu|Y|/2^{(k+1)/p})\big]\\
&\le C\sum_{k=1}^{\infty}2^{k(1/(pt)-q/p)}\mathbb{E}\Big[1+\sum_{j=1}^{k}|Y|^qg_j(\mu|Y|/2^{(j+1)/p})\Big]\\
&\le C\sum_{k=1}^{\infty}2^{k(1/(pt)-q/p)}+C\sum_{j=1}^{\infty}2^{jq/p}\mathbb{V}\{|Y|>2^{j/p}\}\sum_{k=j}^{\infty}2^{k(1/(pt)-q/p)}\\
&\le C+C\sum_{j=1}^{\infty}2^{j/(pt)}\mathbb{V}\{|Y|>2^{j/p}\}<\infty.
\end{aligned}$$

    Proof of Theorem 3.3. By the proof of (4.5), we get

$$\begin{aligned}
&\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\,C_{\mathbb{V}}\Big\{\Big[\max_{1\le k\le n}\Big|\sum_{i=1}^{k}X_i\Big|^{1/t}-\epsilon n^{1/(pt)}\Big]^+\Big\}\\
&\le C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(2)}_{xj}\Big|\ge\frac{x\epsilon^t}{2}\Big\}\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-1}\mathbb{V}\Big\{\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(1)}_{xj}\Big|\ge\frac{x\epsilon^t}{2}\Big\}\mathrm{d}x\\
&=:J_1+J_2. \qquad (4.12)
\end{aligned}$$

Observe that $pt>1$ and $C_{\mathbb{V}}\big(|Y|^p(1\vee l(|Y|^p))\big)<\infty$. By Markov's inequality under sub-linear expectations, Lemmas 2.2 and 2.3, and (4.4), we have

$$\begin{aligned}
J_1&\le C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-2}\mathbb{E}\max_{1\le k\le n}\Big|\sum_{i=-\infty}^{\infty}a_i\sum_{j=i-k}^{i-1}Y^{(2)}_{xj}\Big|\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-2}\mathbb{E}\big(|Y^{(2)}_{x}|\big)\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-2}\mathbb{E}\big(|Y|(1-g(|Y|/x))\big)\mathrm{d}x\\
&\le C\sum_{k=1}^{\infty}k^{1/(pt)-1/p-1}\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\sum_{n=1}^{k}n^{-1/(pt)}l(n)
\le C\sum_{k=1}^{\infty}k^{-1/p}l(k)\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\\
&=C\sum_{n=0}^{\infty}\sum_{k=2^n}^{2^{n+1}-1}k^{-1/p}l(k)\mathbb{E}\big(|Y|(1-g(|Y|/k^{1/p}))\big)
\le C\sum_{n=1}^{\infty}2^{(1-1/p)n}l(2^n)\mathbb{E}\big(|Y|(1-g(|Y|/2^{n/p}))\big)\\
&\le C\sum_{n=1}^{\infty}2^{(1-1/p)n}l(2^n)\mathbb{E}\Big(\sum_{j=n}^{\infty}|Y|g_j(|Y|/2^{j/p})\Big)
\le C\sum_{j=1}^{\infty}\mathbb{E}\big(|Y|g_j(|Y|/2^{j/p})\big)\sum_{n=1}^{j}2^{(1-1/p)n}l(2^n)\\
&\le C\sum_{j=1}^{\infty}2^{j}l(2^j)\mathbb{V}\{|Y|>\mu 2^{(j-1)/p}\}<\infty. \qquad (4.13)
\end{aligned}$$

For $J_2$, as in the estimate of $I_2$, choose $q=2$. By (2.1), we get

$$\begin{aligned}
J_2&\le C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-3}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\mathbb{E}(|Y^{(1)}_{xj}|^2)\Big)\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-3}\max_{-\infty<i<\infty}\Big(\sum_{j=i-n}^{i-1}\big[|\mathbb{E}(Y^{(1)}_{xj})|+|\mathbb{E}(-Y^{(1)}_{xj})|\big]\Big)^2\mathrm{d}x\\
&=:J_{21}+J_{22}.
\end{aligned}$$

By Lemma 2.3, (4.1), and (4.3), we conclude that

$$\begin{aligned}
J_{21}&\le C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-3}\mathbb{E}(|Y^{(1)}_{x}|^2)\,\mathrm{d}x\\
&\le C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-3}\big[x^2\mathbb{E}\big(1-g(|Y|/x)\big)+\mathbb{E}|Y|^2g(\mu|Y|/x)\big]\mathrm{d}x\\
&=C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\sum_{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{\frac1t-1}\mathbb{E}\big(1-g(|Y|/x)\big)\mathrm{d}x\\
&\quad+C\sum_{n=1}^{\infty}n^{-1/(pt)}l(n)\sum_{m=n}^{\infty}\int_{m^{1/p}}^{(m+1)^{1/p}}x^{\frac1t-3}\mathbb{E}|Y|^2g(\mu|Y|/x)\,\mathrm{d}x\\
&\le C\sum_{m=1}^{\infty}m^{\frac{1}{tp}-1}\mathbb{E}\big(1-g(|Y|/m^{1/p})\big)\sum_{n=1}^{m}n^{-1/(pt)}l(n)
+C\sum_{m=1}^{\infty}m^{\frac{1}{tp}-\frac{2}{p}-1}\mathbb{E}|Y|^2g(\mu|Y|/(m+1)^{1/p})\sum_{n=1}^{m}n^{-1/(pt)}l(n)\\
&\le C\sum_{m=1}^{\infty}l(m)\mathbb{V}\{|Y|>\mu m^{1/p}\}+C\sum_{m=1}^{\infty}m^{-2/p}l(m)\mathbb{E}|Y|^2g(\mu|Y|/(m+1)^{1/p})\\
&\le C+C\sum_{n=0}^{\infty}\sum_{m=2^n}^{2^{n+1}-1}m^{-2/p}l(m)\mathbb{E}|Y|^2g(\mu|Y|/(m+1)^{1/p})
\le C+C\sum_{n=1}^{\infty}2^{(1-2/p)n}l(2^n)\mathbb{E}|Y|^2g(\mu|Y|/2^{(n+1)/p})\\
&\le C+C\sum_{n=1}^{\infty}2^{(1-2/p)n}l(2^n)\mathbb{E}\Big[1+\sum_{j=1}^{n}|Y|^2g_j(\mu|Y|/2^{(j+1)/p})\Big]\\
&\le C+C\sum_{n=1}^{\infty}2^{(1-2/p)n}l(2^n)+C\sum_{j=1}^{\infty}2^{2j/p}\mathbb{V}\{|Y|>2^{j/p}\}\sum_{n=j}^{\infty}2^{(1-2/p)n}l(2^n)\\
&\le C+C\sum_{j=1}^{\infty}2^{j}l(2^j)\mathbb{V}\{|Y|>2^{j/p}\}<\infty.
\end{aligned}$$

By $\mathbb{E}(Y_i)=\mathbb{E}(-Y_i)=0$, Proposition 1.3.7 of Peng [2], (4.1), and Lemma 2.1, we see that

$$\begin{aligned}
J_{22}&\le C\sum_{n=1}^{\infty}n^{-1-1/(pt)}l(n)\int_{n^{1/p}}^{\infty}x^{\frac1t-3}\big[n\,\mathbb{E}|Y|(1-g(|Y|/x))\big]^2\mathrm{d}x\\
&=C\sum_{n=1}^{\infty}n^{1-1/(pt)}l(n)\sum_{k=n}^{\infty}\int_{k^{1/p}}^{(k+1)^{1/p}}x^{\frac1t-3}\big[\mathbb{E}|Y|(1-g(|Y|/k^{1/p}))\big]^2\mathrm{d}x\\
&\le C\sum_{k=1}^{\infty}k^{1-2/p}l(k)\big[\mathbb{E}|Y|(1-g(|Y|/k^{1/p}))\big]^2
\le C\sum_{k=1}^{\infty}k^{1-2/p}l(k)\big[C_{\mathbb{V}}\big(|Y|(1-g(|Y|/k^{1/p}))\big)\big]^2\\
&\le C\sum_{k=1}^{\infty}k^{1-2/p}l(k)\big[C_{\mathbb{V}}\big(|Y|I(|Y|>\mu k^{1/p})\big)\big]^2\\
&\le C\sum_{k=1}^{\infty}k^{1-2/p}l(k)\Big[\int_0^{\mu k^{1/p}}\mathbb{V}\big(|Y|>\mu k^{1/p}\big)\mathrm{d}y+\int_{\mu k^{1/p}}^{\infty}\mathbb{V}\big(|Y|>y\big)\mathrm{d}y\Big]^2\\
&\le C\sum_{k=1}^{\infty}k\,l(k)\big[\mathbb{V}\{|Y|>\mu k^{1/p}\}\big]^2+C\sum_{k=1}^{\infty}k^{1-2/p}l(k)\Big[\int_{\mu k^{1/p}}^{\infty}\mathbb{V}\{|Y|>y\}\mathrm{d}y\Big]^2\\
&\le C\int_1^{\infty}x\,l(x)\mathbb{V}^2\{|Y|>\mu x^{1/p}\}\mathrm{d}x+C\int_1^{\infty}x^{1-2/p}l(x)\,\mathrm{d}x\int_{\mu x^{1/p}}^{\infty}\mathbb{V}\{|Y|>y\}\mathrm{d}y\int_{\mu x^{1/p}}^{y}\mathbb{V}\{|Y|>z\}\mathrm{d}z\\
&\le C\int_1^{\infty}\big(x\,l(x)\mathbb{V}\{|Y|^pl(|Y|^p)>Cx\,l(x)\}\big)\mathbb{V}\{|Y|^p>Cx\}\mathrm{d}x\\
&\quad+C\int_{\mu}^{\infty}\mathbb{V}\{|Y|>y\}\mathrm{d}y\int_{\mu}^{y}\mathbb{V}\{|Y|>z\}\mathrm{d}z\int_1^{(z/\mu)^p}x^{1-2/p}l(x)\,\mathrm{d}x\\
&\le C\int_1^{\infty}\mathbb{V}\{|Y|^p>Cx\}\mathrm{d}x+C\int_{\mu}^{\infty}\mathbb{V}\{|Y|>y\}\mathrm{d}y\int_{\mu}^{y}\mathbb{V}\{|Y|>z\}z^{2p-2}l(z^p)\,\mathrm{d}z\\
&\le C\,C_{\mathbb{V}}\{|Y|^p\}+C\int_{\mu}^{\infty}\mathbb{V}\{|Y|>y\}\mathrm{d}y\int_{\mu}^{y}\frac{C_{\mathbb{V}}(|Y|^p)}{z^{p}}\,z^{2p-2}l(z^p)\,\mathrm{d}z\\
&\le C+C\int_{\mu}^{\infty}\mathbb{V}\{|Y|>y\}\,C_{\mathbb{V}}\{|Y|^p\}\,y^{p-1}l(y^p)\,\mathrm{d}y
\le C\,C_{\mathbb{V}}\{|Y|^pl(|Y|^p)\}<\infty.
\end{aligned}$$

    Hence, (3.5) is proved.

We have obtained new results on the complete moment convergence of maximal partial sums of moving average processes produced by negatively dependent random variables under sub-linear expectations. The results obtained in this article generalize those for negatively dependent random variables in probability space, and Theorems 3.1–3.3 complement the results of Xu et al. [7,23], Xu and Kong [8], and Xu [24] in some sense.

    This study was supported by Science and Technology Research Project of Jiangxi Provincial Department of Education of China (No. GJJ2201041), Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), Academic Achievement Re-cultivation Project of Jingdezhen Ceramic University (Grant No. 215/20506277).

    Artificial Intelligence tools were not used.

    The author declares that there are no conflicts of interest.



    [1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic Analysis and Applications, Berlin, Heidelberg: Springer, 2007,541–561. https://doi.org/10.1007/978-3-540-70847-6_25
    [2] S. G. Peng, Nonlinear expectations and stochastic calculus under uncertainty, Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7
    [3] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [4] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 doi: 10.1007/s11425-015-5105-2
    [5] L. X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z doi: 10.1007/s10473-022-0203-z
    [6] X. C. Chen, Q. Y. Wu, Complete convergence theorems for moving average process generated by independent random variables under sub-linear expectations, Commun. Stat.-Theory Methods, 2023. https://doi.org/10.1080/03610926.2023.2220449 doi: 10.1080/03610926.2023.2220449
    [7] M. Z. Xu, K. Cheng, W. K. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 7 (2022), 19998–20019. https://doi.org/10.3934/math.20221094 doi: 10.3934/math.20221094
    [8] M. Z. Xu, X. H. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 8504–8521. https://doi.org/10.3934/math.2023428 doi: 10.3934/math.2023428
    [9] L. X. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0 doi: 10.1007/s40304-015-0055-0
    [10] J. P. Xu, L. X. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta Math. Sin., English Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9 doi: 10.1007/s10114-018-7508-9
    [11] J. P. Xu, L. X. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta Math. Appl. Sin. Engl. Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8 doi: 10.1007/s10255-020-0958-8
    [12] Q. Y. Wu, Y. Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053 doi: 10.1016/j.jmaa.2017.11.053
    [13] L. X. Zhang, J. H. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probab. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022 doi: 10.1016/j.spl.2018.01.022
    [14] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1 doi: 10.1186/s13660-017-1538-1
    [15] F. Hu, Z. J. Chen, D. F. Zhang, How big are the increments of G-Brownian motion, Sci. China Math., 57 (2014), 1687–1700. https://doi.org/10.1007/s11425-014-4816-0 doi: 10.1007/s11425-014-4816-0
    [16] F. Q. Gao, M. Z. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations, Sci. China Math., 41 (2011), 337–352. https://doi.org/10.1360/012009-879 doi: 10.1360/012009-879
    [17] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662 doi: 10.1016/j.jmaa.2019.123662
    [18] Z. J. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0 doi: 10.1007/s11425-015-5095-0
    [19] L. X. Zhang, On the laws of the iterated logarithm under sub-linear expectations, PUQR, 6 (2021), 409–460. https://doi.org/10.3934/puqr.2021020 doi: 10.3934/puqr.2021020
    [20] X. C. Chen, Q. Y. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 doi: 10.3934/math.2022540
    [21] M. Z. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x doi: 10.1186/s13660-021-02692-x
    [22] M. Z. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probab. Lett., 186 (2022), 109464. https://doi.org/10.1016/j.spl.2022.109464 doi: 10.1016/j.spl.2022.109464
    [23] M. Z. Xu, K. Cheng, W. K. Yu, Convergence of linear processes generated by negatively dependent random variables under sub-linear expectations, J. Inequal. Appl., 2023 (2023), 77. https://doi.org/10.1186/s13660-023-02990-6 doi: 10.1186/s13660-023-02990-6
    [24] M. Z. Xu, Complete convergence of moving average processes produced by negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 17067–17080. https://doi.org/10.3934/math.2023871 doi: 10.3934/math.2023871
    [25] M. Z. Xu, Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 19442–19460. https://doi.org/10.3934/math.2023992 doi: 10.3934/math.2023992
    [26] M. L. Guo, J. J. Dai, D. J. Zhu, Complete moment convergence of moving average processes under negative association assumptions, Math. Appl. (Wuhan), 25 (2012), 118–125.
    [27] S. M. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta Math. Sin., English Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z doi: 10.1007/s10114-019-8205-z
    [28] Y. X. Li, L. X. Zhang, Complete moment convergence of moving-average processes under dependence assumptions, Stat. Probab. Lett., 70 (2004), 191–197. https://doi.org/10.1016/j.spl.2004.10.003 doi: 10.1016/j.spl.2004.10.003
    [29] M. S. Hu, Explicit solutions of the G-heat equation for a class of initial conditions, Nonlinear Anal.: Theory, Methods Appl., 75 (2012), 6588–6595. https://doi.org/10.1016/j.na.2012.08.002 doi: 10.1016/j.na.2012.08.002
  • This article has been cited by:

    1. Mingzhou Xu, Xuhang Kong, Complete qth moment convergence of moving average processes for m-widely acceptable random variables under sub-linear expectations, 2024, 214, 01677152, 110203, 10.1016/j.spl.2024.110203
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)