Research article

Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations

  • Received: 03 July 2022 Revised: 25 August 2022 Accepted: 05 September 2022 Published: 13 September 2022
  • MSC : 60F05, 60F15

  • In this article, we study complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations. The results obtained in sub-linear expectation spaces extend the corresponding ones in probability space.

    Citation: Mingzhou Xu, Kun Cheng, Wangke Yu. Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2022, 7(11): 19998-20019. doi: 10.3934/math.20221094




    1. Introduction

    Peng [1,2] first introduced the important concepts of the sub-linear expectation space to study uncertainty in probability. Inspired by the seminal works of Peng [1,2], many scholars have investigated results under sub-linear expectation spaces, extending the corresponding ones in classic probability space. Zhang [3,4,5] established Donsker's invariance principle, exponential inequalities and Rosenthal's inequality under sub-linear expectations. Wu [6] obtained precise asymptotics for complete integral convergence under sub-linear expectations. Under sub-linear expectations, Xu and Cheng [7] investigated how small the increments of G-Brownian motion are. For more limit theorems under sub-linear expectations, the interested reader could refer to Xu and Zhang [8,9], Wu and Jiang [10], Zhang and Lin [11], Zhong and Wu [12], Hu and Yang [13], Chen [14], Chen and Wu [15], Zhang [16], Hu, Chen and Zhang [17], Gao and Xu [18], Kuczmaszewska [19], Xu and Cheng [7,20,21,22,23] and the references therein.

    In classic probability space, Hsu and Robbins [24] introduced the concept of complete convergence, Chow [25] investigated complete moment convergence for independent random variables, Zhang and Ding [26] proved the complete moment convergence of the partial sums of moving average processes under suitable assumptions, and Meng et al. [27] established complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables. For references on complete moment convergence in linear expectation spaces, the interested reader could refer to Ko [28], Meng et al. [29], Hosseini and Nezakati [30] and the references therein. Encouraged by the work of Meng et al. [27], and since the fact that $X$ is independent of $Y$ under sub-linear expectations implies that $X$ is negatively dependent to $Y$ under sub-linear expectations, we study the complete convergence and complete moment convergence for weighted sums of identically distributed, negatively dependent random variables under sub-linear expectations, which extends the corresponding results of Meng et al. [27].

    We organize the remainder of this paper as follows. In the next section, we give the necessary basic notions, concepts and relevant properties, and present the lemmas needed under sub-linear expectations. In Section 3, we state our main results, Theorems 3.1 and 3.2, whose proofs are presented in Section 4.

    2. Preliminaries

    As in Xu and Cheng [22], we use similar notations to those in the works of Peng [2], Chen [14] and Zhang [5]. Suppose that $(\Omega,\mathcal{F})$ is a given measurable space. Assume that $\mathcal{H}$ is a subset of all random variables on $(\Omega,\mathcal{F})$ such that $I_A\in\mathcal{H}$ (cf. Chen [14]), where $I(A)$ or $I_A$ denotes the indicator function of $A$ throughout this paper, $A\in\mathcal{F}$, and $X_1,\ldots,X_n\in\mathcal{H}$ implies $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ fulfilling

    $$|\varphi(x)-\varphi(y)|\le C(1+|x|^m+|y|^m)|x-y|,\quad\forall x,y\in\mathbb{R}^n,$$

    for some $C>0$, $m\in\mathbb{N}$ both depending on $\varphi$.

    Definition 2.1. A sub-linear expectation $\mathbb{E}$ on $\mathcal{H}$ is a functional $\mathbb{E}:\mathcal{H}\to\bar{\mathbb{R}}:=[-\infty,+\infty]$ fulfilling the following properties: for all $X,Y\in\mathcal{H}$, we have

    (a) Monotonicity: If $X\ge Y$, then $\mathbb{E}[X]\ge\mathbb{E}[Y]$;

    (b) Constant preserving: $\mathbb{E}[c]=c$, $\forall c\in\mathbb{R}$;

    (c) Positive homogeneity: $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$, $\forall\lambda\ge0$;

    (d) Sub-additivity: $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$ whenever $\mathbb{E}[X]+\mathbb{E}[Y]$ is not of the form $+\infty-\infty$ or $-\infty+\infty$.

    Remark 2.1. Positive homogeneity in (c) of Definition 2.1 can be understood via Theorem 1.2.1 of Peng [2], which says that a sub-linear expectation can be represented as a supremum of linear expectations. In Theorem 3.1, $\mathbb{E}[X]=\mathbb{E}[-X]=0$ implies that $\mathbb{E}[\alpha X]=\alpha\mathbb{E}[X]$ for all $\alpha\in\mathbb{R}$, but $\mathbb{E}[X]=\mathbb{E}[-X]=0$ does not imply that $\mathbb{E}[\alpha X^{\beta}]=\alpha\mathbb{E}[X^{\beta}]$ for all $\alpha\in\mathbb{R}$ and $\beta\ge1$. By Lemma 2.1, in order to justify $\mathbb{E}[X]=\mathbb{E}[-X]=0$ in Theorem 3.1, we should have $\mathbb{E}[Z+X]=\mathbb{E}[Z-X]$ for all $Z\in\mathcal{H}$.
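    To make Definition 2.1 and Remark 2.1 concrete, the following is a minimal numerical sketch (our own toy construction, not taken from the paper): on a finite sample space a sub-linear expectation can be realized as the maximum of finitely many linear expectations, and properties (b)-(d) can then be checked directly. The family `thetas` of probability vectors is a hypothetical choice.

```python
import numpy as np

# Toy sub-linear expectation on a 3-point sample space, given as the supremum
# of finitely many linear expectations E_theta (cf. Theorem 1.2.1 of Peng [2]).
thetas = [np.array([0.2, 0.5, 0.3]),   # hypothetical probability vectors
          np.array([0.6, 0.3, 0.1])]

def E(values):
    """Sub-linear expectation: sup over the family of linear expectations."""
    return max(float(theta @ values) for theta in thetas)

X = np.array([-1.0, 0.0, 2.0])          # a random variable, listed by outcome
Y = np.array([0.5, -1.0, 1.0])

print(E(X + Y) <= E(X) + E(Y))              # (d) sub-additivity -> True
print(np.isclose(E(3.0 * X), 3.0 * E(X)))   # (c) positive homogeneity -> True
print(E(np.full(3, 7.0)))                   # (b) constant preserving -> 7.0
```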

    A set function $V:\mathcal{F}\to[0,1]$ is called a capacity if

    (a) $V(\emptyset)=0$, $V(\Omega)=1$;

    (b) $V(A)\le V(B)$, $\forall A\subset B$, $A,B\in\mathcal{F}$.

    A capacity $V$ is called sub-additive if $V(A\cup B)\le V(A)+V(B)$, $\forall A,B\in\mathcal{F}$.

    In this article, given a sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, write $\mathbb{V}(A):=\inf\{\mathbb{E}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\}=\mathbb{E}[I_A]$, $\forall A\in\mathcal{F}$ (see (2.3) and the definition of $\mathbb{V}$ above (2.3) in Zhang [4]). $\mathbb{V}$ is a sub-additive capacity. Define

    $$C_{\mathbb{V}}(X):=\int_0^{\infty}\mathbb{V}(X>x)\,dx+\int_{-\infty}^{0}\big(\mathbb{V}(X>x)-1\big)\,dx.$$
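    In particular, for a nonnegative random variable the second integral vanishes, so $C_{\mathbb{V}}$ reduces to a Choquet integral; this is the form in which $C_{\mathbb{V}}(|X|^p)$ is used throughout. The scaling identity below (our added remark, a direct consequence of the definition via the substitution $x=|c|^py$) is what is invoked when the weights $a_{ni}$ are pulled out of $C_{\mathbb{V}}$, e.g., in (4.19):

    $$C_{\mathbb{V}}(|X|^p)=\int_0^{\infty}\mathbb{V}(|X|^p>x)\,dx,\qquad C_{\mathbb{V}}(|cX|^p)=|c|^p\,C_{\mathbb{V}}(|X|^p),\quad c\in\mathbb{R}.$$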

    Suppose that $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal{H}$, and $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal{H}$, are two random vectors on $(\Omega,\mathcal{H},\mathbb{E})$. $Y$ is said to be negatively dependent to $X$ if for each pair of functions $\psi_1\in C_{l,Lip}(\mathbb{R}^m)$, $\psi_2\in C_{l,Lip}(\mathbb{R}^n)$, we have $\mathbb{E}[\psi_1(X)\psi_2(Y)]\le\mathbb{E}[\psi_1(X)]\mathbb{E}[\psi_2(Y)]$ whenever $\psi_1(X)\ge0$, $\mathbb{E}[\psi_2(Y)]\ge0$, $\mathbb{E}[\psi_1(X)\psi_2(Y)]<\infty$, $\mathbb{E}[|\psi_1(X)|]<\infty$, $\mathbb{E}[|\psi_2(Y)|]<\infty$, and either $\psi_1$ and $\psi_2$ are coordinatewise nondecreasing or $\psi_1$ and $\psi_2$ are coordinatewise nonincreasing (see Definition 2.3 of Zhang [4], Definition 1.5 of Zhang [5], Definition 2.5 of Chen [14]). $\{X_n\}_{n=1}^{\infty}$ is called a sequence of negatively dependent random variables if $X_{n+1}$ is negatively dependent to $(X_1,\ldots,X_n)$ for each $n\ge1$.

    Suppose that $X_1$ and $X_2$ are two $n$-dimensional random vectors defined, respectively, in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\mathbb{E}_1)$ and $(\Omega_2,\mathcal{H}_2,\mathbb{E}_2)$. They are said to be identically distributed if for every Borel-measurable function $\psi$ such that $\psi(X_1)\in\mathcal{H}_1$, $\psi(X_2)\in\mathcal{H}_2$,

    $$\mathbb{E}_1[\psi(X_1)]=\mathbb{E}_2[\psi(X_2)],$$

    whenever the sub-linear expectations are finite. $\{X_n\}_{n=1}^{\infty}$ is said to be identically distributed if, for each $i\ge1$, $X_i$ and $X_1$ are identically distributed.

    In the sequel we assume that $\mathbb{E}$ is countably sub-additive, i.e., $\mathbb{E}(X)\le\sum_{n=1}^{\infty}\mathbb{E}(X_n)$ whenever $X\le\sum_{n=1}^{\infty}X_n$, $X,X_n\in\mathcal{H}$, and $X\ge0$, $X_n\ge0$, $n=1,2,\ldots$. Let $C$ stand for a positive constant which may differ from place to place.

    As discussed in Zhang [5], by the definition of negative dependence, if $X_1,X_2,\ldots,X_n$ are negatively dependent random variables and $f_1,f_2,\ldots,f_n$ are all nonincreasing (or all nondecreasing) functions, then $f_1(X_1),f_2(X_2),\ldots,f_n(X_n)$ are still negatively dependent random variables.

    We cite the following lemmas under sub-linear expectations.

    Lemma 2.1. (See Proposition 1.3.7 of Peng [2]) Under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, if $X,Y\in\mathcal{H}$ and $\mathbb{E}[Y]=\mathbb{E}[-Y]=0$, then $\mathbb{E}[X+\alpha Y]=\mathbb{E}[X]$ for any $\alpha\in\mathbb{R}$.

    Lemma 2.2. (See Lemma 4.5 (iii) of Zhang [4]) If $\mathbb{E}$ is countably sub-additive under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, then for $X\in\mathcal{H}$,

    $$\mathbb{E}|X|\le C_{\mathbb{V}}(|X|).$$

    Lemma 2.3. (See Theorem 2.1 of Zhang [5] and its proof) Assume that $p>1$ and $\{X_n;n\ge1\}$ is a sequence of negatively dependent random variables under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Then for each $n\ge1$, there exists a positive constant $C=C(p)$ depending on $p$ such that for $1<p\le2$,

    $$\mathbb{E}\Big|\sum_{i=1}^nX_i\Big|^p\le C\Big[\sum_{i=1}^n\mathbb{E}|X_i|^p+\Big(\sum_{i=1}^n\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^p\Big],\tag{2.1}$$

    and for $p>2$,

    $$\mathbb{E}\Big|\sum_{i=1}^nX_i\Big|^p\le C\Big\{\sum_{i=1}^n\mathbb{E}|X_i|^p+\Big(\sum_{i=1}^n\mathbb{E}X_i^2\Big)^{p/2}+\Big(\sum_{i=1}^n\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^p\Big\}.\tag{2.2}$$

    Proof. For the reader's convenience, we give the detailed proof here. We first prove (2.1). Set $T_k=\max\{X_k,\ X_k+X_{k-1},\ldots,\ X_k+\cdots+X_1\}$ and $\breve{T}_n=\max\{|X_n|,\ |X_n+X_{n-1}|,\ldots,\ |X_n+\cdots+X_1|\}$. Since $T_k^++X_{k+1}+\cdots+X_n\le T_n$, we have $T_k^+\le2\breve{T}_n$. Substituting $x=X_k$ and $y=T_{k-1}^+$, $k=n,\ldots,2$, into the elementary inequality

    $$|x+y|^p\le2^{2-p}|x|^p+|y|^p+px|y|^{p-1}\mathrm{sgn}(y),\quad 1<p\le2,$$

    results in

    $$|T_n|^p\le2^{2-p}|X_n|^p+(T_{n-1}^+)^p+pX_n(T_{n-1}^+)^{p-1}\le2^{2-p}|X_n|^p+|T_{n-1}|^p+pX_n(T_{n-1}^+)^{p-1}\le\cdots\le2^{2-p}\sum_{i=1}^n|X_i|^p+p\sum_{i=2}^nX_i(T_{i-1}^+)^{p-1},$$

    which, by the definition of negative dependence and the Hölder inequality under sub-linear expectations (see Proposition 1.4.2 of Peng [2]), implies that

    $$\mathbb{E}|T_n|^p\le2^{2-p}\mathbb{E}\Big[\sum_{i=1}^n|X_i|^p\Big]+p\sum_{i=2}^n\mathbb{E}\big[X_i(T_{i-1}^+)^{p-1}\big]\le2^{2-p}\mathbb{E}\Big[\sum_{i=1}^n|X_i|^p\Big]+p2^{p-1}\sum_{i=2}^n\big(\mathbb{E}[X_i]\big)^+\big(\mathbb{E}[\breve{T}_n^p]\big)^{1-1/p}.$$

    Similarly,

    $$\mathbb{E}\big|\max\{-X_n,\ -X_n-X_{n-1},\ldots,\ -X_n-\cdots-X_1\}\big|^p\le2^{2-p}\mathbb{E}\Big[\sum_{i=1}^n|X_i|^p\Big]+p2^{p-1}\sum_{i=2}^n\big(\mathbb{E}[-X_i]\big)^+\big(\mathbb{E}[\breve{T}_n^p]\big)^{1-1/p}.$$

    Therefore

    $$\mathbb{E}\big[\breve{T}_n^p\big]\le2^{3-p}\mathbb{E}\Big[\sum_{i=1}^n|X_i|^p\Big]+p2^{p}\sum_{i=1}^n\big[(\mathbb{E}[X_i])^++(\mathbb{E}[-X_i])^+\big]\big(\mathbb{E}[\breve{T}_n^p]\big)^{1-1/p},$$

    which implies that (2.1) holds.

    Next, by (2.4) of Zhang [5] and its proof, writing $S_k=\sum_{i=1}^kX_i$, we see that for $p>2$,

    $$\mathbb{E}\Big[\max_{1\le k\le n}|S_k|^p\Big]\le C_p\Big\{\sum_{i=1}^n\mathbb{E}[|X_i|^p]+\Big(\sum_{i=1}^n\mathbb{E}[|X_i|^2]\Big)^{p/2}+\Big(\sum_{i=1}^n\big[(\mathbb{E}[X_i])^++(\mathbb{E}[-X_i])^+\big]\Big)^p\Big\},\tag{2.3}$$

    which implies that (2.2) holds.

    By Lemma 2.3 and an argument similar to that of Theorem 2.3.1 in Stout [31], we can obtain the following lemma.

    Lemma 2.4. Assume that $q>1$ and $\{X_n;n\ge1\}$ is a sequence of negatively dependent random variables under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Then for each $n\ge1$, there exists a positive constant $C=C(q)$ depending only on $q$ such that for $1<q\le2$,

    $$\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jX_i\Big|^q\Big)\le C(\log n)^q\Big\{\sum_{i=1}^n\mathbb{E}|X_i|^q+\Big(\sum_{i=1}^n\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\Big\},\tag{2.4}$$

    and for q>2,

    $$\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jX_i\Big|^q\Big)\le C(\log n)^q\Big\{\sum_{i=1}^n\mathbb{E}|X_i|^q+\Big(\sum_{i=1}^n\mathbb{E}X_i^2\Big)^{q/2}+\Big(\sum_{i=1}^n\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\Big\}.\tag{2.5}$$

    Proof. For the reader's convenience, we also give a detailed proof. We only prove (2.4), since (2.5) is obvious from (2.3). We first prove (2.4) for $n=2^k$, $k$ being any positive integer. To avoid obscuring the main idea, we give the proof only for $k=6$. Let $X_{r,s}=\sum_{i=r+1}^sX_i$ for $0\le r<s\le2^6$. We consider the following collections of $X_{r,s}$:

    $$\{X_{0,64}\},\ \{X_{0,32},X_{32,64}\},\ \{X_{0,16},X_{16,32},X_{32,48},X_{48,64}\},\ \{X_{0,8},\ldots,X_{56,64}\},\ \{X_{0,4},\ldots,X_{60,64}\},\ \{X_{0,2},\ldots,X_{62,64}\},\ \{X_{0,1},\ldots,X_{63,64}\}.$$

    There are $k+1=7$ collections. We choose $1\le i\le2^6$ and expand $S_i$ using terms from the collections above, with the minimal possible number of terms in the expansion. Clearly at most one term is needed from each collection. As an example,

    $$X_{0,62}=X_{0,32}+X_{32,48}+X_{48,56}+X_{56,60}+X_{60,62}.$$

    Hence each expansion has at most $k+1=7$ terms. Denote the expansion of $S_i$ by

    $$S_i=\sum_{j=1}^hX_{i_{j-1},i_j},\quad h\le7.$$

    It follows from the Hölder inequality that

    $$|S_i|^q\le7^{q-1}\sum_{j=1}^h\big|X_{i_{j-1},i_j}\big|^q.$$

    Now

    $$\sum_{j=1}^h\big|X_{i_{j-1},i_j}\big|^q\le|X_{0,64}|^q+\big(|X_{0,32}|^q+|X_{32,64}|^q\big)+\big(|X_{0,16}|^q+|X_{16,32}|^q+|X_{32,48}|^q+|X_{48,64}|^q\big)+\cdots+\big(|X_{0,1}|^q+|X_{1,2}|^q+\cdots+|X_{63,64}|^q\big).$$

    Hence,

    $$\max_{1\le i\le2^6}|S_i|^q\le7^{q-1}\Big[|X_{0,64}|^q+\big(|X_{0,32}|^q+|X_{32,64}|^q\big)+\big(|X_{0,16}|^q+|X_{16,32}|^q+|X_{32,48}|^q+|X_{48,64}|^q\big)+\cdots+\big(|X_{0,1}|^q+|X_{1,2}|^q+\cdots+|X_{63,64}|^q\big)\Big].$$

    There are $k+1=7$ parenthetical expressions inside the square brackets. By the $C_r$ inequality, we see that

    $$\sum_{i=1}^m|\xi_i|^q\le\Big(\sum_{i=1}^m|\xi_i|\Big)^q,\quad\xi_i\in\mathbb{R},\ m\ge1,$$

    which implies

    $$\Big(\sum_{i=1}^{32}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q+\Big(\sum_{i=33}^{64}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\le\Big(\sum_{i=1}^{2^6}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q,$$

    $$\Big(\sum_{i=1}^{16}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q+\cdots+\Big(\sum_{i=49}^{64}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\le\Big(\sum_{i=1}^{2^6}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q,\ \ldots,$$

    $$\big[|\mathbb{E}(X_1)|+|\mathbb{E}(-X_1)|\big]^q+\cdots+\big[|\mathbb{E}(X_{64})|+|\mathbb{E}(-X_{64})|\big]^q\le\Big(\sum_{i=1}^{2^6}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q.$$

    By (2.1) and the above discussion,

    $$\mathbb{E}\Big[\max_{1\le i\le2^6}|S_i|^q\Big]\le7^{q-1}\cdot7\,C_q\Big[\sum_{i=1}^{2^6}\mathbb{E}|X_i|^q+\Big(\sum_{i=1}^{2^6}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\Big].$$

    Using appropriate notation, the above discussion extended to any $k\ge1$ implies

    $$\mathbb{E}\Big[\max_{1\le i\le2^k}|S_i|^q\Big]\le(k+1)^qC_q\Big[\sum_{i=1}^{2^k}\mathbb{E}|X_i|^q+\Big(\sum_{i=1}^{2^k}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\Big].\tag{2.6}$$

    Given $n$ such that $n\ne2^k$ for any $k\ge1$, choose $k$ satisfying $2^{k-1}<n<2^k$ and redefine $X_i=0$ for $n<i\le2^k$. By (2.6), we see that

    $$\mathbb{E}\Big[\max_{1\le i\le n}|S_i|^q\Big]\le(k+1)^qC_q\Big[\sum_{i=1}^{n}\mathbb{E}|X_i|^q+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(X_i)|+|\mathbb{E}(-X_i)|\big]\Big)^q\Big].$$

    Since $2^{k-1}<n$ implies $(k+1)^q\le[\log(4n)/\log2]^q$, (2.4) follows.
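    The dyadic bookkeeping used above is purely combinatorial, so it can be checked mechanically. The following short sketch (our own illustration; the function name `dyadic_blocks` is not from the paper) reproduces, for any index $i$, the block endpoints of the expansion of $S_i$ and confirms that at most $k+1$ blocks are ever needed.

```python
def dyadic_blocks(i, k):
    """Greedy dyadic expansion of S_i (1 <= i <= 2**k) into blocks X_{r,s}
    taken from the collections of block lengths 2**k, 2**(k-1), ..., 1:
    at most one block per collection, hence at most k+1 blocks in total."""
    endpoints = [0]
    r = 0
    for level in range(k, -1, -1):        # try block lengths 2**k, ..., 2, 1
        length = 1 << level
        if r + length <= i:               # take an aligned block [r, r+length]
            r += length
            endpoints.append(r)
    return endpoints                      # 0 = r_0 < r_1 < ... < r_h = i, h <= k+1

# Example from the proof (k = 6):
# S_62 = X_{0,32} + X_{32,48} + X_{48,56} + X_{56,60} + X_{60,62}
print(dyadic_blocks(62, 6))               # [0, 32, 48, 56, 60, 62]
print(max(len(dyadic_blocks(i, 6)) - 1 for i in range(1, 65)))   # 6 <= k + 1 = 7
```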

    3. Main results

    Our main results are the following.

    Theorem 3.1. Suppose that $\alpha>\frac12$, $\alpha p>1$, and $\{X_n;n\ge1\}$ is a sequence of negatively dependent random variables, identically distributed as $X$, under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Assume that $\mathbb{E}(X)=\mathbb{E}(-X)=0$ when $p>1$. Suppose that $\{a_{ni};1\le i\le n,n\ge1\}$ is an array of real numbers, all nonnegative or all nonpositive, such that

    $$\sum_{i=1}^n|a_{ni}|^p=O(n^{\delta})\quad\text{for }0<\delta<1.\tag{3.1}$$

    Let $C_{\mathbb{V}}(|X|^p)<\infty$. Then for any $\varepsilon>0$,

    $$\sum_{n=1}^{\infty}n^{\alpha p-2}\mathbb{V}\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}<\infty.\tag{3.2}$$
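    As a simple illustration of condition (3.1) (our example, not taken from the paper), one may take equal weights $a_{ni}=n^{(\delta-1)/p}$ for $1\le i\le n$, so that

    $$\sum_{i=1}^n|a_{ni}|^p=n\cdot n^{\delta-1}=n^{\delta},$$

    and (3.2) then controls the complete-convergence rate of the normalized weighted maxima $n^{-\alpha}\max_{1\le j\le n}\big|\sum_{i=1}^ja_{ni}X_i\big|$.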

    Theorem 3.2. Suppose that $p>1$, $\alpha\ge\frac12$, $\alpha p>1$, and $\{X_n;n\ge1\}$ is a sequence of negatively dependent random variables, identically distributed as $X$, under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Assume that $\mathbb{E}(X)=\mathbb{E}(-X)=0$. Suppose that $\{a_{ni};1\le i\le n,n\ge1\}$ is an array of real numbers, all nonnegative or all nonpositive, such that (3.1) holds. Let $C_{\mathbb{V}}(|X|^p)<\infty$. Then for any $\varepsilon>0$,

    $$\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}C_{\mathbb{V}}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}\Big)^+<\infty.\tag{3.3}$$

    Remark 3.1. Under the assumptions of Theorem 3.2, we see that for all ε>0,

    $$\begin{aligned}\infty&>\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}C_{\mathbb{V}}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}\Big)^+\\&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_0^{\varepsilon n^{\alpha}}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{\varepsilon n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt\\&\ge C\sum_{n=1}^{\infty}n^{\alpha p-2}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>2\varepsilon n^{\alpha}\Big).\end{aligned}\tag{3.4}$$

    By (3.4), we conclude that complete moment convergence implies complete convergence.
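    For completeness, the last inequality in (3.4) uses only the monotonicity of $\mathbb{V}$ (a spelled-out version of a step the remark uses implicitly): for $0\le t\le\varepsilon n^{\alpha}$ the event in the first integrand contains the event $\{\max_{1\le j\le n}|\sum_{i=1}^ja_{ni}X_i|>2\varepsilon n^{\alpha}\}$, whence

    $$\int_0^{\varepsilon n^{\alpha}}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt\ge\varepsilon n^{\alpha}\,\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>2\varepsilon n^{\alpha}\Big).$$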

    4. Proofs of main results

    Proof of Theorem 3.1. For all $1\le i\le n$, $n\ge1$, write

    $$Y_{ni}=-n^{\alpha}I(a_{ni}X_i<-n^{\alpha})+a_{ni}X_iI(|a_{ni}X_i|\le n^{\alpha})+n^{\alpha}I(a_{ni}X_i>n^{\alpha}),\qquad T_{nj}=\sum_{i=1}^j(Y_{ni}-\mathbb{E}Y_{ni}),\quad j=1,2,\ldots,n.$$
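    The following elementary bound on the truncation error will be used repeatedly below; it is our spelled-out version of a step the proof uses implicitly (for instance in (4.4) and (4.15)): for $p\ge1$,

    $$|Y_{ni}-a_{ni}X_i|=\big(|a_{ni}X_i|-n^{\alpha}\big)I(|a_{ni}X_i|>n^{\alpha})\le|a_{ni}X_i|I(|a_{ni}X_i|>n^{\alpha})\le\frac{|a_{ni}X_i|^p}{n^{\alpha(p-1)}}.$$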

    We easily observe that for all ε>0,

    $$\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}\subset\Big\{\max_{1\le j\le n}|a_{nj}X_j|>n^{\alpha}\Big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^jY_{ni}\Big|>\varepsilon n^{\alpha}\Big\},\tag{4.1}$$

    which results in

    $$\begin{aligned}\mathbb{V}\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}&\le\mathbb{V}\Big(\max_{1\le j\le n}|a_{nj}X_j|>n^{\alpha}\Big)+\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jY_{ni}\Big|>\varepsilon n^{\alpha}\Big)\\&\le\sum_{j=1}^n\mathbb{V}\big(|a_{nj}X_j|>n^{\alpha}\big)+\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\varepsilon n^{\alpha}-\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|\Big).\end{aligned}\tag{4.2}$$

    Firstly, we will establish that

    $$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|\to0,\quad\text{as }n\to\infty.\tag{4.3}$$

    We study the following three cases.

    (ⅰ) If $\frac12<\alpha\le1$, then $p>1$. By $\mathbb{E}X=\mathbb{E}(-X)=0$, $|\mathbb{E}(X-Y)|\le\mathbb{E}|X-Y|$, $C_{\mathbb{V}}(|X|^p)<\infty$ and Lemmas 2.1 and 2.2, we can see that

    $$\begin{aligned}n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|&\le n^{-\alpha}\sum_{i=1}^n|\mathbb{E}Y_{ni}|\le n^{-\alpha}\sum_{i=1}^n\big|\mathbb{E}[Y_{ni}-a_{ni}X_i]\big|\le n^{-\alpha}\sum_{i=1}^n\mathbb{E}\big|Y_{ni}-a_{ni}X_i\big|\\&\le C\sum_{i=1}^n\mathbb{E}|a_{ni}X|^pn^{-\alpha p}\le Cn^{-\alpha p}\sum_{i=1}^n|a_{ni}|^p\mathbb{E}|X|^p\le Cn^{\delta-\alpha p}C_{\mathbb{V}}(|X|^p)\to0,\quad\text{as }n\to\infty.\end{aligned}\tag{4.4}$$

    (ⅱ) If $\alpha>1$, $p<1$, then by $C_{\mathbb{V}}(|X|^p)<\infty$ and Lemma 2.2, we see that

    $$\begin{aligned}n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|&\le n^{-\alpha}\sum_{i=1}^n|\mathbb{E}Y_{ni}|\le n^{-\alpha}\sum_{i=1}^n\big|\mathbb{E}a_{ni}X_iI(|a_{ni}X_i|\le n^{\alpha})\big|+C\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X_i|>n^{\alpha}\big)\\&\le n^{-\alpha}\sum_{i=1}^n\big|\mathbb{E}a_{ni}XI(|a_{ni}X|\le n^{\alpha})\big|+C\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>n^{\alpha}\big)\\&\le C\sum_{i=1}^n\mathbb{E}\big(|a_{ni}X|^p\big)n^{-\alpha p}+\sum_{i=1}^n\mathbb{E}\big(|a_{ni}X|^p\big)n^{-\alpha p}\le Cn^{-\alpha p}\sum_{i=1}^n|a_{ni}|^p\mathbb{E}|X|^p\le Cn^{\delta-\alpha p}C_{\mathbb{V}}(|X|^p)\to0,\quad\text{as }n\to\infty.\end{aligned}\tag{4.5}$$

    (ⅲ) If $\alpha>1$, $p\ge1$, then by $\mathbb{E}|X|\le(\mathbb{E}|X|^p)^{1/p}\le(C_{\mathbb{V}}(|X|^p))^{1/p}<\infty$, the Markov inequality under sub-linear expectations and the Hölder inequality, we see that

    $$\begin{aligned}n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|&\le n^{-\alpha}\sum_{i=1}^n|\mathbb{E}Y_{ni}|\le n^{-\alpha}\sum_{i=1}^n\mathbb{E}\big|a_{ni}X_iI(|a_{ni}X_i|\le n^{\alpha})\big|+\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X_i|>n^{\alpha}\big)\\&\le Cn^{-\alpha}\sum_{i=1}^n|a_{ni}|+Cn^{-\alpha}\sum_{i=1}^n|a_{ni}|\le Cn^{-\alpha}\Big(\sum_{i=1}^n|a_{ni}|^p\Big)^{1/p}n^{1-1/p}\le Cn^{1-\alpha-(1-\delta)/p}\to0,\quad\text{as }n\to\infty.\end{aligned}\tag{4.6}$$

    Combining (4.4)–(4.6) results in (4.3) immediately. Hence, for n sufficiently large,

    $$\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big)\le\sum_{j=1}^n\mathbb{V}\big(|a_{nj}X_j|>n^{\alpha}\big)+\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\Big).\tag{4.7}$$

    To prove (3.2), we only need to establish that

    $$I:=\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X_i|>n^{\alpha}\big)<\infty\tag{4.8}$$

    and

    $$II:=\sum_{n=1}^{\infty}n^{\alpha p-2}\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\Big)<\infty.\tag{4.9}$$

    For $I$, by the Markov inequality under sub-linear expectations and Lemma 2.2, we obtain

    $$I=\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>n^{\alpha}\big)\le C\sum_{n=1}^{\infty}n^{-2}\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\le C\sum_{n=1}^{\infty}n^{\delta-2}C_{\mathbb{V}}(|X|^p)<\infty.\tag{4.10}$$

    As pointed out before Lemma 2.2, for each $n\ge1$, $\{Y_{ni}-\mathbb{E}Y_{ni};1\le i\le n\}$ is also a sequence of negatively dependent random variables. By Lemma 2.4, the Markov inequality under sub-linear expectations and the $C_r$ inequality, we conclude that for $q>2$,

    $$\begin{aligned}II&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}n^{-\alpha q}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(Y_{ni}-\mathbb{E}Y_{ni})\Big|^q\Big)\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\mathbb{E}|Y_{ni}-\mathbb{E}Y_{ni}|^q+\Big(\sum_{i=1}^n\mathbb{E}|Y_{ni}-\mathbb{E}Y_{ni}|^2\Big)^{q/2}+\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^q\Big)\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\sum_{i=1}^n\mathbb{E}|Y_{ni}|^q+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\mathbb{E}|Y_{ni}|^2\Big)^{q/2}\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^q=:II_1+II_2+II_3.\end{aligned}\tag{4.11}$$

    Taking $q>\max\{2,p\}$, by the $C_r$ inequality, the Markov inequality under sub-linear expectations and Lemma 2.2, we have

    $$\begin{aligned}II_1&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X_i|^qI(|a_{ni}X_i|\le n^{\alpha})+n^{\alpha q}\mathbb{V}(|a_{ni}X_i|>n^{\alpha})\big]\\&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\sum_{i=1}^n\mathbb{E}|a_{ni}X|^qI(|a_{ni}X|\le n^{\alpha})+C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\sum_{i=1}^n\mathbb{V}(|a_{ni}X|>n^{\alpha})\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^qI(|a_{ni}X|\le n^{\alpha})}{n^{\alpha q}}+C\sum_{n=1}^{\infty}n^{-2}(\log n)^q\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^p}{n^{\alpha p}}+C\sum_{n=1}^{\infty}n^{-2}(\log n)^q\sum_{i=1}^n|a_{ni}|^pC_{\mathbb{V}}(|X|^p)\\&\le C\sum_{n=1}^{\infty}n^{-2}(\log n)^q\sum_{i=1}^n|a_{ni}|^pC_{\mathbb{V}}(|X|^p)+C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^q\le C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^q<\infty.\end{aligned}\tag{4.12}$$

    For II2, we study the following cases.

    (ⅰ) If $p\ge2$, observe that $\sum_{i=1}^na_{ni}^2\le\big(\sum_{i=1}^n|a_{ni}|^p\big)^{2/p}n^{1-2/p}\le Cn^{1-2(1-\delta)/p}$. Taking $q>\max\big\{2,\frac{2p(\alpha p-1)}{2\alpha p-p+2(1-\delta)}\big\}$, by the $C_r$ inequality and $\mathbb{E}X^2\le(\mathbb{E}(|X|^p))^{2/p}\le(C_{\mathbb{V}}(|X|^p))^{2/p}<\infty$, we see that

    $$\begin{aligned}II_2&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X_i|^2I(|a_{ni}X_i|\le n^{\alpha})+n^{2\alpha}\mathbb{V}(|a_{ni}X_i|>n^{\alpha})\big]\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|\le n^{\alpha})\Big)^{q/2}+C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(\sum_{i=1}^n\mathbb{V}(|a_{ni}X|>n^{\alpha})\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^na_{ni}^2\Big)^{q/2}+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^na_{ni}^2\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\big(n^{1-2(1-\delta)/p}\big)^{q/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q+\frac q2-\frac{(1-\delta)q}{p}}(\log n)^q<\infty.\end{aligned}\tag{4.13}$$

    (ⅱ) If $p<2$, we take $q>\frac{2(\alpha p-1)}{\alpha p-\delta}$. By the $C_r$ inequality, the Markov inequality under sub-linear expectations and Lemma 2.2, we see that

    $$\begin{aligned}II_2&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X_i|^2I(|a_{ni}X_i|\le n^{\alpha})+n^{2\alpha}\mathbb{V}(|a_{ni}X_i|>n^{\alpha})\big]\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big(n^{\alpha(2-p)}\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\Big)^{q/2}+C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(n^{-\alpha p}\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\frac{\alpha pq}{2}+\frac{\delta q}{2}}(\log n)^q<\infty.\end{aligned}\tag{4.14}$$

    For II3, we study the following cases.

    (ⅰ) If $\frac12<\alpha\le1$, then $p>1$. Taking $q>\frac{\alpha p-1}{\alpha p-\delta}$, by $\mathbb{E}(X)=\mathbb{E}(-X)=0$ and Lemmas 2.1 and 2.2, we see that

    $$\begin{aligned}II_3&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[|\mathbb{E}[-Y_{ni}+a_{ni}X_i]|+|\mathbb{E}[Y_{ni}-a_{ni}X_i]|\big]\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[\mathbb{E}[|{-}Y_{ni}+a_{ni}X_i|]+\mathbb{E}[|Y_{ni}-a_{ni}X_i|]\big]\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X|^pn^{-\alpha(p-1)}+\mathbb{E}|a_{ni}X|^pn^{-\alpha(p-1)}\big]\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^qn^{-\alpha(p-1)q+\delta q}\big(C_{\mathbb{V}}(|X|^p)\big)^q\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha pq+\delta q}(\log n)^q<\infty.\end{aligned}\tag{4.15}$$

    (ⅱ) If $\alpha>1$, $p<1$, taking $q>\frac{\alpha p-1}{\alpha p-\delta}$, by Lemma 2.2, we obtain

    $$\begin{aligned}II_3&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X|I(|a_{ni}X|\le n^{\alpha})+n^{\alpha}\mathbb{V}(|a_{ni}X|>n^{\alpha})\big]\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|I(|a_{ni}X|\le n^{\alpha})}{n^{\alpha}}\Big)^q+C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(n^{-\alpha p}\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\Big)^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^p}{n^{\alpha p}}\Big)^q\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(n^{-\alpha p}\sum_{i=1}^n|a_{ni}|^p\Big)^q\big(C_{\mathbb{V}}(|X|^p)\big)^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2+q(\delta-\alpha p)}(\log n)^q<\infty.\end{aligned}$$

    (ⅲ) If $\alpha>1$, $p>1$, then $\mathbb{E}|X|\le(\mathbb{E}|X|^p)^{1/p}\le(C_{\mathbb{V}}(|X|^p))^{1/p}<\infty$. We take $q>\frac{(\alpha p-1)p}{\alpha p-p+(1-\delta)}$. Hence, by the $C_r$ inequality, the Markov inequality under sub-linear expectations and the Hölder inequality, we see that

    $$\begin{aligned}II_3&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[\mathbb{E}|a_{ni}X|I(|a_{ni}X|\le n^{\alpha})+n^{\alpha}\mathbb{V}(|a_{ni}X|>n^{\alpha})\big]\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n\big[|a_{ni}|+\mathbb{E}|a_{ni}X|\big]\Big]^q\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\sum_{i=1}^n|a_{ni}|\Big]^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q}(\log n)^q\Big[\Big(\sum_{i=1}^n|a_{ni}|^p\Big)^{1/p}n^{1-1/p}\Big]^q\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q+q-\frac{(1-\delta)q}{p}}(\log n)^q<\infty.\end{aligned}$$

    Hence, the proof of Theorem 3.1 is finished.

    Proof of Theorem 3.2. For all $\varepsilon>0$, we see that

    $$\begin{aligned}\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}C_{\mathbb{V}}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}\Big)^+&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_0^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt\\&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_0^{n^{\alpha}}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}+t\Big)dt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}+t\Big)dt\\&\le\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_0^{n^{\alpha}}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big)dt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>t\Big)dt\\&\le\sum_{n=1}^{\infty}n^{\alpha p-2}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big)+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>t\Big)dt=:III_1+III_2.\end{aligned}\tag{4.16}$$

    By Theorem 3.1, we conclude that $III_1<\infty$. Therefore, it is enough to establish $III_2<\infty$. Without loss of generality, assume that $a_{ni}\ge0$. For all $1\le i\le n$, $n\ge1$ and $t\ge n^{\alpha}$, write

    $$\begin{aligned}Y_{ni}&=-tI(a_{ni}X_i<-t)+a_{ni}X_iI(|a_{ni}X_i|\le t)+tI(a_{ni}X_i>t),\\Z_{ni}&=a_{ni}X_i-Y_{ni}=(a_{ni}X_i+t)I(a_{ni}X_i<-t)+(a_{ni}X_i-t)I(a_{ni}X_i>t),\\T_{nj}&=\sum_{i=1}^j(Y_{ni}-\mathbb{E}Y_{ni}),\quad j=1,2,\ldots,n.\end{aligned}$$

    We easily see that for all ε>0,

    $$\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>t\Big)\le\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X_i|>t\big)+\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jY_{ni}\Big|>t\Big),\tag{4.17}$$

    which results in

    $$\begin{aligned}III_2:&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^ja_{ni}X_i\Big|>t\Big)dt\\&\le\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}\mathbb{V}\big(|a_{ni}X_i|>t\big)dt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jY_{ni}\Big|>t\Big)dt=:III_{21}+III_{22}.\end{aligned}\tag{4.18}$$

    For $III_{21}$, by $p>1$ and Lemma 2.2, we obtain

    $$\begin{aligned}III_{21}:&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}\mathbb{V}\big(|a_{ni}X_i|>t\big)dt=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^n\int_{n^{\alpha p}}^{\infty}\mathbb{V}\big(|a_{ni}X|^p>s\big)\frac1ps^{\frac1p-1}ds\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^n\int_{n^{\alpha p}}^{\infty}\mathbb{V}\big(|a_{ni}X|^p>s\big)\big(n^{\alpha p}\big)^{\frac1p-1}ds\le C\sum_{n=1}^{\infty}n^{-2}\sum_{i=1}^nC_{\mathbb{V}}\big(|a_{ni}X|^p\big)\\&=C\sum_{n=1}^{\infty}n^{-2}\sum_{i=1}^n|a_{ni}|^pC_{\mathbb{V}}(|X|^p)\le C\sum_{n=1}^{\infty}n^{\delta-2}<\infty.\end{aligned}\tag{4.19}$$

    For $III_{22}$, we first establish that

    $$\sup_{t\ge n^{\alpha}}\frac1t\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|\to0,\quad\text{as }n\to\infty.\tag{4.20}$$

    For $1\le i\le n$, $n\ge1$ and $p>1$, by $\mathbb{E}X_n=\mathbb{E}(-X_n)=0$ and Lemma 2.1, we see that $\mathbb{E}Y_{ni}=\mathbb{E}(-Z_{ni})$. If $a_{ni}X_i>t$, then $0<Z_{ni}=a_{ni}X_i-t<a_{ni}X_i$. If $a_{ni}X_i<-t$, then $a_{ni}X_i<Z_{ni}=a_{ni}X_i+t<0$. Hence $|Z_{ni}|\le|a_{ni}X_i|I(|a_{ni}X_i|>t)$. Then, by Lemma 2.2, we have

    $$\begin{aligned}\sup_{t\ge n^{\alpha}}\frac1t\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|&=\sup_{t\ge n^{\alpha}}\frac1t\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}(-Z_{ni})\Big|\le C\sup_{t\ge n^{\alpha}}\frac1t\sum_{i=1}^n\mathbb{E}|Z_{ni}|\le C\sup_{t\ge n^{\alpha}}\frac1t\sum_{i=1}^n\mathbb{E}|a_{ni}X_i|I(|a_{ni}X_i|>t)\\&\le C\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|I(|a_{ni}X|>n^{\alpha})}{n^{\alpha}}\le C\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^p}{n^{\alpha p}}\le Cn^{\delta-\alpha p}C_{\mathbb{V}}(|X|^p)\to0,\quad\text{as }n\to\infty.\end{aligned}\tag{4.21}$$

    Hence, when $n$ is large enough, for $t\ge n^{\alpha}$,

    $$\max_{1\le j\le n}\Big|\sum_{i=1}^j\mathbb{E}Y_{ni}\Big|\le\frac t2,\tag{4.22}$$

    which results in

    $$\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^jY_{ni}\Big|>t\Big)\le\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\frac t2\Big).\tag{4.23}$$

    In the following, we show that $III_{22}<\infty$, both for $1<p\le2$ and for $p>2$.

    (ⅰ) If $1<p\le2$, by (4.23), Lemma 2.4, the Markov inequality under sub-linear expectations and the $C_r$ inequality, we obtain

    $$\begin{aligned}III_{22}&\le\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\frac t2\Big)dt\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}t^{-2}\mathbb{E}\Big(\max_{1\le j\le n}|T_{nj}|^2\Big)dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}t^{-2}(\log n)^2\Big(\sum_{i=1}^n\mathbb{E}|Y_{ni}-\mathbb{E}Y_{ni}|^2+\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^2\Big)dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\int_{n^{\alpha}}^{\infty}t^{-2}\sum_{i=1}^n\mathbb{E}|a_{ni}X_i|^2I(|a_{ni}X_i|\le n^{\alpha})dt\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\int_{n^{\alpha}}^{\infty}t^{-2}\sum_{i=1}^n\mathbb{E}|a_{ni}X_i|^2I(n^{\alpha}<|a_{ni}X_i|\le t)dt\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}\mathbb{V}\big(|a_{ni}X_i|>t\big)dt\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\int_{n^{\alpha}}^{\infty}t^{-2}\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^2dt\\&=:III_{221}+III_{222}+III_{223}+III_{224}.\end{aligned}\tag{4.24}$$

    For $III_{221}$, by $1<p\le2$, we see that

    $$\begin{aligned}III_{221}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^2\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X_i|^2I(|a_{ni}X_i|\le n^{\alpha})}{n^{2\alpha}}=C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^2\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|\le n^{\alpha})}{n^{2\alpha}}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^2\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^pI(|a_{ni}X|\le n^{\alpha})}{n^{\alpha p}}\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^2n^{-\alpha p+\delta}C_{\mathbb{V}}(|X|^p)\le C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^2<\infty.\end{aligned}\tag{4.25}$$

    For $III_{222}$, by $1<p\le2$, the Markov inequality under sub-linear expectations and Lemma 2.2, we see that

    $$\begin{aligned}III_{222}&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\int_{n^{\alpha}}^{\infty}t^{-2}\sum_{i=1}^n\mathbb{E}|a_{ni}X_i|^2I(n^{\alpha}<|a_{ni}X_i|\le t)dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2\sum_{i=1}^n\mathbb{E}|a_{ni}X|^p\int_{n^{\alpha}}^{\infty}t^{-p}dt\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2n^{\alpha(1-p)}\sum_{i=1}^n|a_{ni}|^pC_{\mathbb{V}}(|X|^p)\\&\le C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^2<\infty.\end{aligned}\tag{4.26}$$

    For $III_{223}$, by the proof of $III_{21}<\infty$, we can see that $III_{223}<\infty$. For $III_{224}$, by $1<p\le2$, $\mathbb{E}(X)=\mathbb{E}(-X)=0$ and Lemma 2.1, we obtain

    $$\begin{aligned}III_{224}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2n^{-\alpha}\Big(\sum_{i=1}^n\big[|\mathbb{E}[-Y_{ni}+a_{ni}X_i]|+|\mathbb{E}[Y_{ni}-a_{ni}X_i]|\big]\Big)^2\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2n^{-\alpha}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|I(|a_{ni}X|>n^{\alpha})\Big)^2\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^2n^{-\alpha}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^pn^{-\alpha(p-1)}\Big)^2\\&\le C\sum_{n=1}^{\infty}n^{2\delta-\alpha p-2}(\log n)^2\big(C_{\mathbb{V}}(|X|^p)\big)^2<\infty.\end{aligned}$$

    Therefore, we conclude that $III_{22}<\infty$ for $1<p\le2$.

    (ⅱ) If $p>2$, by (4.23), $\mathbb{E}(X)=\mathbb{E}(-X)=0$, the Markov inequality under sub-linear expectations, the $C_r$ inequality and Lemma 2.4 (for $q>2$), we see that

    $$\begin{aligned}III_{22}&\le\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}|T_{nj}|>\frac t2\Big)dt\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{n^{\alpha}}^{\infty}t^{-q}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(Y_{ni}-\mathbb{E}Y_{ni})\Big|^q\Big)dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}|Y_{ni}-\mathbb{E}Y_{ni}|^q+\Big(\sum_{i=1}^n\mathbb{E}(Y_{ni}-\mathbb{E}Y_{ni})^2\Big)^{q/2}+\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^q\Big)dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}t^{-q}\mathbb{E}|Y_{ni}|^qdt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}Y_{ni}^2\Big)^{q/2}dt\\&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^qdt=:IV_1+IV_2+IV_3.\end{aligned}\tag{4.27}$$

    For $IV_1$, we obtain

    $$\begin{aligned}IV_1&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}t^{-q}\mathbb{E}|a_{ni}X|^qI(|a_{ni}X|\le n^{\alpha})dt+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}t^{-q}\mathbb{E}|a_{ni}X|^qI(n^{\alpha}<|a_{ni}X|\le t)dt\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\sum_{i=1}^n\int_{n^{\alpha}}^{\infty}\mathbb{V}\big(|a_{ni}X|>t\big)dt=:IV_{11}+IV_{12}+IV_{13}.\end{aligned}\tag{4.28}$$

    By proofs similar to those of $III_{221}<\infty$ and $III_{222}<\infty$ (with $q$ in place of the exponent $2$), we can see that $IV_{11}<\infty$ and $IV_{12}<\infty$. Similarly, by the proof of $III_{21}<\infty$, we can see that $IV_{13}<\infty$.

    For $IV_2$, we obtain

    $$\begin{aligned}IV_2&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|\le n^{\alpha})\Big)^{q/2}dt+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(n^{\alpha}<|a_{ni}X|\le t)\Big)^{q/2}dt\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}\Big(\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>t\big)\Big)^{q/2}dt=:IV_{21}+IV_{22}+IV_{23}.\end{aligned}\tag{4.29}$$

    For $IV_{21}$, taking $q>\max\big\{2,\frac{2p(\alpha p-1)}{2\alpha p-p+2(1-\delta)}\big\}$, by the $C_r$ inequality, the Jensen inequality under sub-linear expectations and Lemma 2.2, we obtain

    $$\begin{aligned}IV_{21}&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|\le n^{\alpha})\Big)^{q/2}dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^qn^{\alpha-\alpha q}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|\le n^{\alpha})\Big)^{q/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^qn^{\alpha-\alpha q}\Big(\sum_{i=1}^na_{ni}^2\Big)^{q/2}\big(\mathbb{E}|X|^p\big)^{q/p}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^qn^{\alpha-\alpha q}\big(n^{1-2(1-\delta)/p}\big)^{q/2}\big(C_{\mathbb{V}}(|X|^p)\big)^{q/p}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha q+\frac q2-\frac{(1-\delta)q}{p}}(\log n)^q<\infty.\end{aligned}\tag{4.30}$$

    For $IV_{22}$, taking $q>\max\big\{2,\frac{2(\alpha p-1)}{\alpha p-\delta}\big\}$, by the Jensen inequality under sub-linear expectations and Lemma 2.2, we have

    $$\begin{aligned}IV_{22}&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}t^{-q}\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(n^{\alpha}<|a_{ni}X|\le t)\Big)^{q/2}dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|>n^{\alpha})\Big)^{q/2}\int_{n^{\alpha}}^{\infty}t^{-q}dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^2I(|a_{ni}X|>n^{\alpha})}{n^{2\alpha}}\Big)^{q/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^q\Big(\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^pI(|a_{ni}X|>n^{\alpha})}{n^{\alpha p}}\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2}(\log n)^qn^{-\frac{\alpha pq}{2}}n^{\frac{\delta q}{2}}\big(C_{\mathbb{V}}(|X|^p)\big)^{q/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\frac{\alpha pq}{2}+\frac{\delta q}{2}}(\log n)^q<\infty.\end{aligned}\tag{4.31}$$

    For $IV_{23}$, by the Markov inequality under sub-linear expectations and Lemma 2.2, we conclude that

    $$\sup_{t\ge n^{\alpha}}\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>t\big)\le\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>n^{\alpha}\big)\le\sum_{i=1}^n\frac{\mathbb{E}|a_{ni}X|^p}{n^{\alpha p}}\le Cn^{\delta-\alpha p}\to0,\quad\text{as }n\to\infty.\tag{4.32}$$

    Hence, since $t\ge n^{\alpha}$, for all $n$ sufficiently large we deduce that

    $$\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>t\big)<1.\tag{4.33}$$

    By (4.29) and (4.33) (noting that $x^{q/2}\le x$ for $0\le x\le1$ and $q\ge2$), we obtain

    $$\begin{aligned}IV_{23}&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}\Big(\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>t\big)\Big)^{q/2}dt\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}\sum_{i=1}^n\mathbb{V}\big(|a_{ni}X|>t\big)dt\le C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^q<\infty.\end{aligned}\tag{4.34}$$

    For $IV_3$, taking $q>\max\big\{\frac{\alpha p-1}{\alpha p-\delta},2\big\}$, by $\mathbb{E}(X)=\mathbb{E}(-X)=0$, Lemma 2.1 and Lemma 2.2, we see that

    $$\begin{aligned}IV_3&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^qn^{-(q-1)\alpha}\Big(\sum_{i=1}^n\big[|\mathbb{E}(Y_{ni}-a_{ni}X_i)|+|\mathbb{E}(-Y_{ni}+a_{ni}X_i)|\big]\Big)^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha-(q-1)\alpha}(\log n)^q\Big(\sum_{i=1}^n\mathbb{E}|a_{ni}X_i|^pn^{-\alpha(p-1)}\Big)^q\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha-(q-1)\alpha}(\log n)^qn^{-\alpha q(p-1)+\delta q}\big(C_{\mathbb{V}}(|X|^p)\big)^q\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha qp+\delta q}(\log n)^q<\infty.\end{aligned}$$

    Hence, the proof of Theorem 3.2 is finished.

    5. Conclusions

    We have established new results on complete convergence and complete moment convergence for weighted sums of negatively dependent random variables under sub-linear expectations. The theorems of this article extend the corresponding convergence properties for weighted sums of extended negatively dependent random variables in classical probability space.

    This research was supported by the Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), the Natural Science Foundation Program of Jiangxi Province (No. 20202BABL211005), the National Natural Science Foundation of China (No. 61662037) and the Jiangxi Province Key S & T Cooperation Project (No. 20212BDH80021).

    All authors declare no conflict of interest in this paper.



    References

    [1] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic analysis and applications, Berlin: Springer, 2007, 541–561. https://doi.org/10.1007/978-3-540-70847-6_25
    [2] S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7
    [3] L. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0 doi: 10.1007/s40304-015-0055-0
    [4] L. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [5] L. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 doi: 10.1007/s11425-015-5105-2
    [6] Q. Wu, Precise asymptotics for complete integral convergence under sublinear expectations, Math. Probl. Eng., 2020 (2020), 3145935. https://doi.org/10.1155/2020/3145935 doi: 10.1155/2020/3145935
    [7] M. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probabil. Lett., 186 (2022), 109464. https://doi.org/10.1016/j.spl.2022.109464 doi: 10.1016/j.spl.2022.109464
    [8] J. Xu, L. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta Math. Appl. Sin. Engl. Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9 doi: 10.1007/s10114-018-7508-9
    [9] J. Xu, L. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta Math. Appl. Sin. Engl. Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8 doi: 10.1007/s10255-020-0958-8
    [10] Q. Wu, Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053 doi: 10.1016/j.jmaa.2017.11.053
    [11] L. Zhang, J. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probabil. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022 doi: 10.1016/j.spl.2018.01.022
    [12] H. Zhong, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1 doi: 10.1186/s13660-017-1538-1
    [13] Z. Hu, Y. Yang, Some inequalities and limit theorems under sublinear expectations, Acta Math. Appl. Sin. Engl. Ser., 33 (2017), 451–462. https://doi.org/10.1007/s10255-017-0673-2 doi: 10.1007/s10255-017-0673-2
    [14] Z. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0 doi: 10.1007/s11425-015-5095-0
    [15] X. Chen, Q. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 doi: 10.3934/math.2022540
    [16] L. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z
    [17] F. Hu, Z. Chen, D. Zhang, How big are the increments of G-Brownian motion, Sci. China Math., 57 (2014), 1687–1700. https://doi.org/10.1007/s11425-014-4816-0 doi: 10.1007/s11425-014-4816-0
    [18] F. Gao, M. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations, Sci. China Math., 41 (2011), 337–352. https://doi.org/10.1360/012009-879 doi: 10.1360/012009-879
    [19] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662 doi: 10.1016/j.jmaa.2019.123662
    [20] M. Xu, K. Cheng, Precise asymptotics in the law of the iterated logarithm under sublinear expectations, Math. Probl. Eng., 2021 (2021), 6691857. https://doi.org/10.1155/2021/6691857 doi: 10.1155/2021/6691857
    [21] M. Xu, K. Cheng, Equivalent conditions of complete th moment convergence for weighted sums of IID random variables under sublinear expectations, Discrete Dyn. Nat. Soc., 2021 (2021), 7471550. https://doi.org/10.1155/2021/7471550 doi: 10.1155/2021/7471550
    [22] M. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x doi: 10.1186/s13660-021-02692-x
    [23] M. Xu, K. Cheng, Note on precise asymptotics in the law of the iterated logarithm under sublinear expectations, Math. Probl. Eng., 2022 (2022), 6058563. https://doi.org/10.1155/2022/6058563 doi: 10.1155/2022/6058563
    [24] P. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25 doi: 10.1073/pnas.33.2.25
    [25] Y. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201.
    [26] Y. Zhang, X. Ding, Further research on complete moment convergence for moving average process of a class of random variables, J. Inequal. Appl., 2017 (2017), 46. https://doi.org/10.1186/s13660-017-1322-2 doi: 10.1186/s13660-017-1322-2
    [27] B. Meng, D. Wang, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Commun. Stat.-Theor. M., 51 (2022), 3847–3863. https://doi.org/10.1080/03610926.2020.1804587 doi: 10.1080/03610926.2020.1804587
    [28] M. Ko, Complete moment convergence of moving average process generated by a class of random variables, J. Inequal. Appl., 2015 (2015), 225. https://doi.org/10.1186/s13660-015-0745-x doi: 10.1186/s13660-015-0745-x
    [29] B. Meng, D. Wang, Q. Wu, Convergence of asymptotically almost negatively associated random variables with random coefficients, Commun. Stat.-Theor. M., in press. https://doi.org/10.1080/03610926.2021.1963457
    [30] S. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta Math. Sin., Engl. Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z doi: 10.1007/s10114-019-8205-z
    [31] W. Stout, Almost sure convergence, New York: Academic Press, 1974.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
