Research article

Complete convergence and complete integral convergence for weighted sums of widely acceptable random variables under the sub-linear expectations

  • Since the concept of the sub-linear expectation space was put forward, it has filled gaps in the theory of probability space. In this paper, we establish the complete convergence and complete integral convergence for weighted sums of widely acceptable (abbreviated as WA) random variables under sub-linear expectations, under different sets of conditions. We extend the complete moment convergence in probability space to the sub-linear expectation space.

    Citation: Chengcheng Jia, Qunying Wu. Complete convergence and complete integral convergence for weighted sums of widely acceptable random variables under the sub-linear expectations[J]. AIMS Mathematics, 2022, 7(5): 8430-8448. doi: 10.3934/math.2022470




    The classical limit theorems consider additive probabilities and additive expectations, which are suitable when the model is determined, but this additivity assumption is not feasible in many areas of practice. As a mathematical theory, nonlinear expectation allows analysis and computation under model uncertainty. Within this theory, sub-linear expectation plays a special role and is the most studied. Peng [1,2,3] put forward the general notion of the sub-linear expectation space in 2006, replacing the probability of probability space with the capacity of the sub-linear expectation space, which enriches the theory of probability space. Subsequently, through Zhang's [4,5,6,7] research on the sub-linear expectation space, some important inequalities were obtained; these inequalities are a powerful tool for studying the sub-linear expectation space. In addition, Zhang also studied the law of the iterated logarithm and the strong law of large numbers in the sub-linear expectation space. Extending this further, Wu and Jiang [8] obtained the Marcinkiewicz-type strong law of large numbers and the Chover-type law of the iterated logarithm for more general cases in the sub-linear expectation space.

    In probability space, complete convergence and complete moment convergence are two very important research topics. The notion of complete convergence was proposed by Hsu and Robbins [9] in 1947. In 1988, Chow [10] introduced the concept of complete moment convergence, which is stronger than complete convergence. The theory of complete convergence and complete moment convergence in probability space is relatively mature. For example, Qiu [11], Wu [12], and Shen [13] respectively obtained the complete convergence and complete moment convergence for sequences of independent identically distributed (i.i.d.), negatively associated (NA), and extended negatively dependent (END) random variables in probability space. Many methods and tools of probability space cannot be used in the sub-linear expectation space, which increases the difficulty of studying it, but many scholars have carried out such research. For example, Wu [14] extended the theorems of Wu [12] from probability space to the sub-linear expectation space. Feng [15] and Liang [16] obtained the complete convergence and complete integral convergence for arrays of rowwise ND and END random variables, respectively. Zhong [17] studied the complete convergence and complete integral convergence for weighted sums of END random variables. Lu [18] obtained more general conditions and conclusions than Zhong [17] in the sub-linear expectation space. The exponential inequality used in this article was proposed by Kuczmaszewska [19] in 2020; in that inequality, the truncated random variables are assumed to form a WA sequence. Because the WA notion was proposed only recently, there is little research on WA random variable sequences in the sub-linear expectation space. Hu [20] proved the complete convergence for weighted sums of WA random variables in 2021.

    The organizational structure of this paper is as follows. In Section 2, we summarize some basic notation and concepts, as well as the related properties of the sub-linear expectation space, and give preliminary lemmas that are helpful for obtaining the main results. In Section 3, we extend the results of [21] from probability space to the sub-linear expectation space, obtain the corresponding conclusions, and prove the complete convergence and complete integral convergence for weighted sums of WA random variables in the sub-linear expectation space.

    We use the framework and notations of Peng [1,2,3]. Let $(\Omega,\mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega,\mathcal{F})$ such that if $X_1,X_2,\ldots,X_n\in\mathcal{H}$ then $\varphi(X_1,X_2,\ldots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying

    $$|\varphi(x)-\varphi(y)|\le c(1+|x|^m+|y|^m)|x-y|,\quad \forall x,y\in\mathbb{R}^n,$$

    for some $c>0$, $m\in\mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables. In this case, we write $X\in\mathcal{H}$.

    Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}:\mathcal{H}\to[-\infty,+\infty]$ satisfying the following properties: for all $X,Y\in\mathcal{H}$, we have

    (a) Monotonicity: if $X\ge Y$, then $\hat{\mathbb{E}}X\ge\hat{\mathbb{E}}Y$;

    (b) Constant preserving: $\hat{\mathbb{E}}(c)=c$ for $c\in\mathbb{R}$;

    (c) Sub-additivity: $\hat{\mathbb{E}}(X+Y)\le\hat{\mathbb{E}}X+\hat{\mathbb{E}}Y$ whenever $\hat{\mathbb{E}}X+\hat{\mathbb{E}}Y$ is not of the form $+\infty-\infty$ or $-\infty+\infty$;

    (d) Positive homogeneity: $\hat{\mathbb{E}}(\lambda X)=\lambda\hat{\mathbb{E}}X$ for $\lambda\ge 0$. Convention: when $\lambda=0$ and $\hat{\mathbb{E}}X=\pm\infty$, $\hat{\mathbb{E}}(\lambda X)=\lambda\hat{\mathbb{E}}X=0$.

    The triple $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sub-linear expectation space.
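As a concrete illustration (not from the paper), a sub-linear expectation can be realized as an upper expectation $\hat{\mathbb{E}}X=\sup_{P\in\Theta}E_P[X]$ over a family $\Theta$ of linear expectations; the two-element family and three-point sample space below are hypothetical, chosen only to check properties (a)-(d) numerically.

```python
# Hypothetical sketch: an upper expectation over two probability vectors
# on a three-point sample space realizes a sub-linear expectation.
outcomes = [-1.0, 0.0, 2.0]
theta = [
    [0.2, 0.5, 0.3],  # probability vector P1 over the outcomes
    [0.5, 0.3, 0.2],  # probability vector P2
]

def E_hat(X):
    """Upper expectation sup_{P in theta} E_P[X] of a function X on the outcomes."""
    return max(sum(p * X(w) for p, w in zip(P, outcomes)) for P in theta)

X = lambda w: w          # a random variable
Y = lambda w: w * w      # another one

assert E_hat(X) <= E_hat(lambda w: X(w) + 1.0)                        # (a) monotonicity, since X <= X + 1
assert E_hat(lambda w: 3.0) == 3.0                                    # (b) constant preserving
assert E_hat(lambda w: X(w) + Y(w)) <= E_hat(X) + E_hat(Y) + 1e-12    # (c) sub-additivity
assert abs(E_hat(lambda w: 2.0 * X(w)) - 2.0 * E_hat(X)) < 1e-12      # (d) positive homogeneity
```

Note that sub-additivity is generally strict: applying (c) to $X$ and $-X$ gives $\hat{\mathbb{E}}X+\hat{\mathbb{E}}(-X)\ge 0$, with equality only when the family collapses to a single measure.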

    Given a sub-linear expectation $\hat{\mathbb{E}}$, let us denote the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ by

    $$\hat{\varepsilon}X:=-\hat{\mathbb{E}}(-X),\quad \forall X\in\mathcal{H}.$$

    From the definition, it is easily shown that for all $X,Y\in\mathcal{H}$,

    $$\hat{\varepsilon}X\le\hat{\mathbb{E}}X,\quad \hat{\mathbb{E}}(X+c)=\hat{\mathbb{E}}X+c,\quad |\hat{\mathbb{E}}(X-Y)|\le\hat{\mathbb{E}}|X-Y| \ \text{ and } \ \hat{\mathbb{E}}(X-Y)\ge\hat{\mathbb{E}}X-\hat{\mathbb{E}}Y.$$

    If $\hat{\mathbb{E}}Y=\hat{\varepsilon}Y$, then $\hat{\mathbb{E}}(X+aY)=\hat{\mathbb{E}}X+a\hat{\mathbb{E}}Y$ for any $a\in\mathbb{R}$. Next, we consider the capacities corresponding to the sub-linear expectations. Let $\mathcal{G}\subset\mathcal{F}$. A function $V:\mathcal{G}\to[0,1]$ is called a capacity if

    $$V(\emptyset)=0,\quad V(\Omega)=1, \quad\text{and}\quad V(A)\le V(B) \ \text{ for } \ A\subset B,\ A,B\in\mathcal{G}.$$

    It is called sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{G}$ with $A\cup B\in\mathcal{G}$. In the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, we define a pair $(\mathbb{V},\mathcal{V})$ of capacities by

    $$\mathbb{V}(A):=\inf\{\hat{\mathbb{E}}\xi: I(A)\le\xi,\ \xi\in\mathcal{H}\},\quad \mathcal{V}(A):=1-\mathbb{V}(A^c),\quad \forall A\in\mathcal{F},$$

    where $A^c$ is the complement of $A$. By the definitions of $\mathbb{V}$ and $\mathcal{V}$, it is obvious that $\mathbb{V}$ is sub-additive, and

    $$\mathcal{V}(A)\le\mathbb{V}(A),\quad \forall A\in\mathcal{F}.$$
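As a hypothetical numerical check (not from the paper), for a capacity generated by a finite family of probability vectors, the upper capacity $\mathbb{V}(A)=\sup_P P(A)$ is sub-additive and dominates the lower capacity $\mathcal{V}(A)=1-\mathbb{V}(A^c)$; the family below is invented for illustration.

```python
# Hypothetical sketch: the capacity pair induced by a finite family of
# probability vectors. V(A) = sup_P P(A); the lower capacity is v(A) = 1 - V(A^c).
outcomes = [0, 1, 2]
theta = [[0.2, 0.5, 0.3], [0.5, 0.3, 0.2]]

def V(A):
    """Upper capacity: the largest probability any member of theta assigns to A."""
    return max(sum(p for p, w in zip(P, outcomes) if w in A) for P in theta)

def v(A):
    """Lower capacity, defined through the complement of A."""
    comp = {w for w in outcomes if w not in A}
    return 1.0 - V(comp)

A, B = {0}, {1}
assert v(A) <= V(A) + 1e-12               # lower capacity never exceeds the upper one
assert V(A | B) <= V(A) + V(B) + 1e-12    # sub-additivity of the upper capacity
```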

    If $f\le I(A)\le g$, $f,g\in\mathcal{H}$, then

    $$\hat{\mathbb{E}}f\le\mathbb{V}(A)\le\hat{\mathbb{E}}g,\quad \hat{\varepsilon}f\le\mathcal{V}(A)\le\hat{\varepsilon}g. \quad (2.1)$$

    This implies the Markov inequality: for $X\in\mathcal{H}$,

    $$\mathbb{V}(|X|\ge x)\le\hat{\mathbb{E}}(|X|^p)/x^p,\quad \forall x>0,\ p>0,$$

    which follows from $I(|X|\ge x)\le|X|^p/x^p\in\mathcal{H}$. From Lemma 4.1 in Zhang [5], we have the Hölder inequality: for $X,Y\in\mathcal{H}$ and $p,q>1$ satisfying $p^{-1}+q^{-1}=1$,

    $$\hat{\mathbb{E}}|XY|\le(\hat{\mathbb{E}}|X|^p)^{1/p}(\hat{\mathbb{E}}|Y|^q)^{1/q},$$

    whenever

    $$\hat{\mathbb{E}}(|X|^p)<\infty,\quad \hat{\mathbb{E}}(|Y|^q)<\infty.$$

    In particular, the Jensen inequality:

    $$(\hat{\mathbb{E}}|X|^r)^{1/r}\le(\hat{\mathbb{E}}|X|^s)^{1/s},\quad \text{for } 0<r\le s.$$

    We define the Choquet integrals $(C_{\mathbb{V}},C_{\mathcal{V}})$ by

    $$C_V(X)=\int_0^\infty V(X\ge x)\,\mathrm{d}x+\int_{-\infty}^0[V(X\ge x)-1]\,\mathrm{d}x,$$

    with $V$ being replaced by $\mathbb{V}$ and $\mathcal{V}$, respectively.
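A minimal numeric sketch (assumptions: $X\ge 0$ with finitely many values, and the capacity below is hypothetical): for such $X$ the Choquet integral reduces to a finite sum over the jump points of $x\mapsto V(X\ge x)$, and when $V$ is an ordinary probability measure it coincides with the usual expectation.

```python
# Sketch: Choquet integral of a nonnegative, finitely-valued X, computed as
# a finite sum over the jump points of x -> V(X >= x).
def choquet(values, V):
    """values: list of (outcome, X-value) pairs; V: capacity on sets of outcomes."""
    pts = sorted({v for _, v in values if v > 0})
    total, prev = 0.0, 0.0
    for x in pts:
        A = {w for w, v in values if v >= x}   # level set {X >= x}
        total += (x - prev) * V(A)             # area of the slice [prev, x)
        prev = x
    return total

# With V an ordinary probability measure, the Choquet integral is the expectation.
probs = {'a': 0.2, 'b': 0.5, 'c': 0.3}
V = lambda A: sum(probs[w] for w in A)
X = [('a', 0.0), ('b', 1.0), ('c', 3.0)]
assert abs(choquet(X, V) - (0.5 * 1.0 + 0.3 * 3.0)) < 1e-12   # E[X] = 1.4
```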

    Definition 2.2.

    (i) $\hat{\mathbb{E}}$ is called countably sub-additive if $\hat{\mathbb{E}}(X)\le\sum_{n=1}^\infty\hat{\mathbb{E}}(X_n)$ whenever $X\le\sum_{n=1}^\infty X_n$, $X,X_n\in\mathcal{H}$, $X\ge 0$, $X_n\ge 0$, $n\ge 1$.

    (ii) $\mathbb{V}$ is called countably sub-additive if

    $$\mathbb{V}\left(\bigcup_{n=1}^\infty A_n\right)\le\sum_{n=1}^\infty\mathbb{V}(A_n),\quad \forall A_n\in\mathcal{F}.$$

    Definition 2.3. (Identical distribution) Let $X_1$ and $X_2$ be two random variables defined respectively in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed if

    $$\hat{\mathbb{E}}_1(\varphi(X_1))=\hat{\mathbb{E}}_2(\varphi(X_2)),\quad \forall\varphi\in C_{l,Lip}(\mathbb{R}),$$

    whenever the sub-linear expectations are finite. A sequence $\{X_n;n\ge 1\}$ of random variables is said to be identically distributed if $X_i$ and $X_1$ are identically distributed for each $i\ge 1$.

    Definition 2.4. (WA) Let $\{Y_n;n\ge 1\}$ be a sequence of random variables in a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. The sequence $\{Y_n;n\ge 1\}$ is called widely acceptable (WA) if for all $t\ge 0$ and all $n\in\mathbb{N}$,

    $$\hat{\mathbb{E}}\exp\left(\sum_{i=1}^n tY_i\right)\le g(n)\prod_{i=1}^n\hat{\mathbb{E}}\exp(tY_i), \quad (2.2)$$

    where $0<g(n)<\infty$.
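For intuition on (2.2): in the classical linear-expectation case, independent random variables satisfy it with $g(n)=1$, because the moment generating function of the sum factorizes; WA relaxes this factorization by the growth factor $g(n)$. A hypothetical two-coin check of the factorization:

```python
# Classical sanity check (hypothetical coins, not from the paper): for two
# independent +-1 coins, E exp(t(Y1+Y2)) = E exp(tY1) * E exp(tY2),
# i.e. the WA inequality (2.2) holds with g(n) = 1.
import math

vals, probs = [1.0, -1.0], [0.6, 0.4]   # P(+1) = 0.6
t = 0.7

lhs = sum(pa * pb * math.exp(t * (a + b))
          for a, pa in zip(vals, probs) for b, pb in zip(vals, probs))
rhs = sum(pa * math.exp(t * a) for a, pa in zip(vals, probs)) ** 2
assert abs(lhs - rhs) < 1e-12           # equality: g(2) = 1 suffices here
```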

    Definition 2.5. [22] A function $L:(0,\infty)\to(0,\infty)$ is:

    (i) a slowly varying function (at infinity) if, for any $a>0$,

    $$\lim_{x\to\infty}\frac{L(ax)}{L(x)}=1;$$

    (ii) a regularly varying function with index $\alpha>0$ if, for any $a>0$,

    $$\lim_{x\to\infty}\frac{L(ax)}{L(x)}=a^\alpha.$$

    Lemma 2.6. [22] Every regularly varying function (with index $\alpha>0$) $l:(0,\infty)\to(0,\infty)$ is of the form

    $$l(x)=x^\alpha L(x),$$

    where $L$ is a slowly varying function.
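A quick numeric illustration of Definition 2.5 and Lemma 2.6 under the hypothetical choices $L(x)=\log x$ and $\alpha=1.5$: the ratio $L(ax)/L(x)$ drifts to $1$ while $l(ax)/l(x)$ drifts to $a^\alpha$ as $x$ grows.

```python
# Sketch: L(x) = log x is slowly varying, so l(x) = x^alpha * L(x) is
# regularly varying with index alpha (Lemma 2.6). Choices are hypothetical.
import math

alpha, a = 1.5, 3.0
L = lambda x: math.log(x)
l = lambda x: x ** alpha * L(x)

for x in [1e3, 1e6, 1e9]:
    ratio_L = L(a * x) / L(x)    # tends to 1       (slow variation)
    ratio_l = l(a * x) / l(x)    # tends to a^alpha (regular variation)
    assert abs(ratio_L - 1.0) < 0.2
    assert abs(ratio_l - a ** alpha) < a ** alpha * 0.2
```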

    In the following, let $\{X_n;n\ge 1\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. The symbol $c$ stands for a generic positive constant which may differ from one place to another. Let $a_x\sim b_x$ denote $\lim_{x\to\infty}a_x/b_x=1$, and let $a_n\ll b_n$ denote that there exists a constant $c>0$ such that $a_n\le cb_n$ for sufficiently large $n$; $I(\cdot)$ denotes an indicator function. $a\vee b$ means the maximum of $a$ and $b$, while $a\wedge b$ means the minimum of $a$ and $b$.

    To prove our results, we need the following lemmas.

    Lemma 2.7. [17] Suppose $X\in\mathcal{H}$, $\alpha>0$, $p>0$, and $l(x)$ is a slowly varying function.

    (i) Then, for any $c>0$,

    $$C_{\mathbb{V}}\left(|X|^p l(|X|^{1/\alpha})\right)<\infty \iff \sum_{n=1}^\infty n^{\alpha p-1}l(n)\mathbb{V}(|X|>cn^\alpha)<\infty.$$

    Taking $l(x)=1$ and $l(x)=\log x$, respectively, we get that for any $c>0$,

    $$C_{\mathbb{V}}(|X|^p)<\infty \iff \sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>cn^\alpha)<\infty,$$
    $$C_{\mathbb{V}}(|X|^p\log|X|)<\infty \iff \sum_{n=1}^\infty n^{\alpha p-1}\log n\,\mathbb{V}(|X|>cn^\alpha)<\infty.$$

    (ii) If $C_{\mathbb{V}}\left(|X|^p l(|X|^{1/\alpha})\right)<\infty$, then for any $\theta>1$ and $c>0$,

    $$\sum_{k=1}^\infty \theta^{k\alpha p}l(\theta^k)\mathbb{V}(|X|>c\theta^{k\alpha})<\infty.$$

    Taking $l(x)=1$ and $l(x)=\log x$, respectively, we have

    $$C_{\mathbb{V}}(|X|^p)<\infty \Rightarrow \sum_{k=1}^\infty \theta^{k\alpha p}\mathbb{V}(|X|>c\theta^{k\alpha})<\infty,$$
    $$C_{\mathbb{V}}(|X|^p\log|X|)<\infty \Rightarrow \sum_{k=1}^\infty \theta^{k\alpha p}(\log\theta^k)\mathbb{V}(|X|>c\theta^{k\alpha})<\infty.$$

    The last lemma is the exponential inequality for WA random variables, which can be found in [19].

    Lemma 2.8. [19] Let $\{X_1,X_2,\ldots,X_n\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_i\le 0$ for $1\le i\le n$, and write $S_n=\sum_{i=1}^n X_i$. Let $d>0$ be a real number and define $X^{(d)}=\min\{X,d\}$. Assume that $Y_i:=X_i^{(d)}$, $1\le i\le n$, satisfy (2.2) for all $t>0$. Then, for all $x>0$, we have

    $$\mathbb{V}(S_n\ge x)\le\mathbb{V}\left(\max_{1\le i\le n}X_i>d\right)+g(n)\exp\left(\frac{x}{d}-\frac{x}{d}\ln\left(1+\frac{xd}{\sum_{i=1}^n\hat{\mathbb{E}}X_i^2}\right)\right).$$

    Next, we state and prove the main theorems of this article.

    Let $\{X_n;n\ge 1\}$ be a sequence of random variables in the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, $\alpha>1/2$, $\alpha p>1$, $\varepsilon>0$, $\delta>0$ and $\beta_1=\frac{[\alpha(p\wedge 2)-1]\varepsilon}{4(\alpha p-1+\delta)}>0$. For fixed $n\ge 1$, denote for $1\le i\le n$,

    $$Y_i=-\beta_1 n^\alpha I(X_i<-\beta_1 n^\alpha)+X_i I(|X_i|\le\beta_1 n^\alpha)+\beta_1 n^\alpha I(X_i>\beta_1 n^\alpha). \quad (3.1)$$
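The truncation (3.1) is simply clipping $X_i$ to the interval $[-\beta_1 n^\alpha,\beta_1 n^\alpha]$; a sketch with hypothetical numbers:

```python
# Sketch of the truncation (3.1): the three-indicator formula is just a clip.
def truncate(x, b):
    """Y = -b*I(x < -b) + x*I(|x| <= b) + b*I(x > b), i.e. clip x to [-b, b]."""
    return max(-b, min(x, b))

b = 2.0   # stands for the hypothetical level beta_1 * n^alpha
assert truncate(-5.0, b) == -2.0   # clipped below
assert truncate(0.7, b) == 0.7     # untouched inside the interval
assert truncate(9.0, b) == 2.0     # clipped above
```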

    Theorem 3.1. Let $\alpha>1/2$, $\alpha p>1$ and let $\{X_n;n\ge 1\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_i=\hat{\varepsilon}X_i=0$ if $p>1$, such that the sequence $\{Y_i;1\le i\le n\}$ of truncated random variables is WA and the control coefficient $g(n)$ in (2.2) is a regularly varying function with index $\delta$ for some $\delta>0$. Assume that $\{a_{ni};1\le i\le n,n\ge 1\}$ is an array of real numbers such that, for some $q>\max\{2,p\}$,

    $$\sum_{i=1}^n|a_{ni}|^q=O(n),\quad |a_{ni}|\le c, \quad (3.2)$$

    and that there also exist a random variable $X\in\mathcal{H}$ and a constant $c$ satisfying

    $$\hat{\mathbb{E}}[f(X_n)]\le c\hat{\mathbb{E}}[f(X)],\quad \forall n\ge 1,\ 0\le f\in C_{l,Lip}(\mathbb{R}). \quad (3.3)$$

    Then

    $$\hat{\mathbb{E}}|X|^p\le C_{\mathbb{V}}(|X|^p)<\infty \quad (3.4)$$

    implies that for all $\varepsilon>0$,

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|>\varepsilon n^\alpha\right)<\infty. \quad (3.5)$$

    Let $0<\beta_2<\min\left\{\frac{2\wedge(p\wedge r)}{2r},\ \frac{\alpha[2\wedge(p\wedge r)]-1}{2(\alpha p-1+\delta)}\right\}$. For any $1\le i\le n$, $n\ge 1$, and $t\ge n^{\alpha r}$, denote

    $$Y_i=-\beta_2 t^{1/r}I(X_i<-\beta_2 t^{1/r})+X_i I(|X_i|\le\beta_2 t^{1/r})+\beta_2 t^{1/r}I(X_i>\beta_2 t^{1/r}). \quad (3.6)$$

    Theorem 3.2. Let $r>0$, $\alpha>1/2$, $\alpha(p\wedge r)>1$ and let $\{X_n;n\ge 1\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_i=\hat{\varepsilon}X_i=0$ if $p\wedge r>1$, such that the sequence $\{Y_i;1\le i\le n\}$ of truncated random variables is WA and the control coefficient $g(n)$ in (2.2) is a regularly varying function with index $\delta$ for some $\delta>0$. Assume that $\{a_{ni};1\le i\le n,n\ge 1\}$ is an array of real numbers, that condition (3.2) holds for some $q>\max\{2,p\vee r\}$, and that condition (3.3) also holds. Then

    $$\begin{cases}\hat{\mathbb{E}}|X|^{p\vee r}\le C_{\mathbb{V}}(|X|^{p\vee r})<\infty & \text{if } r\ne p,\\ \hat{\mathbb{E}}|X|^p\log|X|\le C_{\mathbb{V}}(|X|^p\log|X|)<\infty & \text{if } r=p,\end{cases} \quad (3.7)$$

    implies that for any $\varepsilon>0$,

    $$\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|-\varepsilon n^\alpha\right)_+^r<\infty. \quad (3.8)$$

    Remark. In Theorem 3.2, we extend the complete moment convergence for weighted sums of random variables in probability space, obtained in [21], to complete integral convergence for weighted sums of WA random variables in the sub-linear expectation space.

    Proof of Theorem 3.1. Since $\sum_{i=1}^n a_{ni}X_i=\sum_{i=1}^n a_{ni}^+X_i-\sum_{i=1}^n a_{ni}^-X_i$, we have

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|>\varepsilon n^\alpha\right)\le \sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}^+X_i\right|>\frac{\varepsilon n^\alpha}{2}\right)+\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}^-X_i\right|>\frac{\varepsilon n^\alpha}{2}\right).$$

    So, without loss of generality, we can assume that $a_{ni}\ge 0$ for $1\le i\le n$ and $n\ge 1$.

    To prove (3.5), it suffices to prove

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}X_i>\varepsilon n^\alpha\right)<\infty,\quad \forall\varepsilon>0. \quad (3.9)$$

    Indeed, since $\{-X_n;n\ge 1\}$ still satisfies the conditions of Theorem 3.1, we can then also obtain

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}X_i<-\varepsilon n^\alpha\right)<\infty,\quad \forall\varepsilon>0. \quad (3.10)$$

    From (3.9) and (3.10), we get (3.5). The following proves that (3.9) holds. The sequence $\{Y_i;1\le i\le n\}$ is defined by (3.1). For fixed $n\ge 1$, denote for $1\le i\le n$,

    $$Z_i=X_i-Y_i=(X_i+\beta_1 n^\alpha)I(X_i<-\beta_1 n^\alpha)+(X_i-\beta_1 n^\alpha)I(X_i>\beta_1 n^\alpha).$$

    It is easily checked that for $\varepsilon>0$,

    $$\left(\sum_{i=1}^n a_{ni}X_i>\varepsilon n^\alpha\right)\subset \bigcup_{i=1}^n\left(|X_i|>\beta_1 n^\alpha\right)\cup\left(\sum_{i=1}^n a_{ni}Y_i>\varepsilon n^\alpha\right). \quad (3.11)$$

    So, we have

    $$\begin{aligned}\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}X_i>\varepsilon n^\alpha\right)&\le\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}(|X_i|>\beta_1 n^\alpha)+\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}Y_i>\varepsilon n^\alpha\right)\\ &\le\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}(|X_i|>\beta_1 n^\alpha)+\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>\varepsilon n^\alpha-\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\right)\\ &:=I_1+I_2.\end{aligned}$$

    To prove (3.9), it suffices to show $I_1<\infty$ and $I_2<\infty$.

    For $0<\mu<1$, let $g(x)\in C_{l,Lip}(\mathbb{R})$ be a decreasing function on $x\ge 0$ such that $0\le g(x)\le 1$ for all $x$, and $g(x)=1$ if $|x|\le\mu$, $g(x)=0$ if $|x|>1$. Then

    $$I(|x|\le\mu)\le g(|x|)\le I(|x|\le 1),\quad I(|x|>1)\le 1-g(|x|)\le I(|x|>\mu). \quad (3.12)$$

    By (3.12), Lemma 2.7 (i) and (3.3), we can get that

    $$I_1\le\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\hat{\mathbb{E}}\left(1-g\left(\frac{|X_i|}{\beta_1 n^\alpha}\right)\right)\le c\sum_{n=1}^\infty n^{\alpha p-1}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_1 n^\alpha}\right)\right)\le c\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>cn^\alpha)<\infty.$$

    In the following, we prove that $I_2<\infty$. Firstly, we will show that

    $$n^{-\alpha}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\to 0,\quad n\to\infty.$$

    By (3.2) and the Hölder inequality, we have for any $0<\rho<q$ that

    $$\sum_{i=1}^n a_{ni}^\rho\le\left(\sum_{i=1}^n a_{ni}^q\right)^{\rho/q}\left(\sum_{i=1}^n 1\right)^{1-\rho/q}\le cn. \quad (3.13)$$

    For any $\lambda>0$, by (3.12) and the $C_r$ inequality, we have

    $$|Y_i|^\lambda\le|X_i|^\lambda I(|X_i|\le\beta_1 n^\alpha)+\beta_1^\lambda n^{\alpha\lambda}I(|X_i|>\beta_1 n^\alpha)\le|X_i|^\lambda g\left(\frac{\mu|X_i|}{\beta_1 n^\alpha}\right)+\beta_1^\lambda n^{\alpha\lambda}\left(1-g\left(\frac{|X_i|}{\beta_1 n^\alpha}\right)\right),$$
    $$|Z_i|^\lambda\le|X_i+\beta_1 n^\alpha|^\lambda I(X_i<-\beta_1 n^\alpha)+|X_i-\beta_1 n^\alpha|^\lambda I(X_i>\beta_1 n^\alpha)\le|X_i|^\lambda\left(1-g\left(\frac{|X_i|}{\beta_1 n^\alpha}\right)\right).$$

    Thus

    $$\hat{\mathbb{E}}|Y_i|^\lambda\ll\hat{\mathbb{E}}|X|^\lambda g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)+\beta_1^\lambda n^{\alpha\lambda}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_1 n^\alpha}\right)\right)\ll\hat{\mathbb{E}}|X|^\lambda g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)+\beta_1^\lambda n^{\alpha\lambda}\mathbb{V}(|X|>\mu\beta_1 n^\alpha),$$
    $$\hat{\mathbb{E}}|Z_i|^\lambda\le\hat{\mathbb{E}}|X_i|^\lambda\left(1-g\left(\frac{|X_i|}{\beta_1 n^\alpha}\right)\right)\ll\hat{\mathbb{E}}|X|^\lambda\left(1-g\left(\frac{|X|}{\beta_1 n^\alpha}\right)\right). \quad (3.14)$$

    By Lemma 2.7 (i), we can get that

    $$\sum_{n=1}^\infty\mathbb{V}(|X|>cn^\alpha)\le\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>cn^\alpha)<\infty,$$

    and $\mathbb{V}(|X|>cn^\alpha)$ is non-increasing in $n$, so we get $n\mathbb{V}(|X|>cn^\alpha)\to 0$ as $n\to\infty$.

    When $0<p\le 1$: since $q>\max\{2,p\}>1$, by (3.13), (3.14) and $\alpha p>1$, we have that

    $$\begin{aligned}n^{-\alpha}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|&\le n^{-\alpha}\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}|Y_i|\\ &\ll n^{1-\alpha}\left(\hat{\mathbb{E}}|X|g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)+\beta_1 n^\alpha\mathbb{V}(|X|>cn^\alpha)\right)\\ &=n^{1-\alpha}\hat{\mathbb{E}}|X|g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)+\beta_1 n\mathbb{V}(|X|>cn^\alpha)\\ &\ll n^{1-\alpha p}\hat{\mathbb{E}}|X|^p+\beta_1 n\mathbb{V}(|X|>cn^\alpha)\to 0,\quad n\to\infty.\end{aligned}$$

    When $p>1$: since $q>p>1$, by (3.13), (3.14) and $\hat{\mathbb{E}}X_i=\hat{\varepsilon}X_i=0$, we have

    $$n^{-\alpha}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\le n^{-\alpha}\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}|X_i-Y_i|=n^{-\alpha}\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}|Z_i|\ll n^{1-\alpha}\hat{\mathbb{E}}|X|\left(1-g\left(\frac{|X|}{\beta_1 n^\alpha}\right)\right)\ll n^{1-\alpha p}\hat{\mathbb{E}}|X|^p\to 0,\quad n\to\infty.$$

    Hence, $n^{-\alpha}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\le\varepsilon/2$ for all $n$ large enough, which implies that

    $$I_2\ll\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>\frac{\varepsilon n^\alpha}{2}\right).$$

    Since the sequence $\{Y_i;1\le i\le n\}$ of truncated random variables is assumed to be WA and $a_{ni}\ge 0$, by (2.2) we have

    $$\hat{\mathbb{E}}\exp\left(\sum_{i=1}^n ta_{ni}Y_i\right)\le g(n)\prod_{i=1}^n\hat{\mathbb{E}}\exp(ta_{ni}Y_i).$$

    Because $\exp\left(-\sum_{i=1}^n ta_{ni}\hat{\mathbb{E}}Y_i\right)\ge 0$, we can get that

    $$\begin{aligned}\hat{\mathbb{E}}\exp\left(\sum_{i=1}^n ta_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right)&=\exp\left(-\sum_{i=1}^n ta_{ni}\hat{\mathbb{E}}Y_i\right)\hat{\mathbb{E}}\exp\left(\sum_{i=1}^n ta_{ni}Y_i\right)\\ &\le\prod_{i=1}^n\exp(-ta_{ni}\hat{\mathbb{E}}Y_i)\,g(n)\prod_{i=1}^n\hat{\mathbb{E}}\exp(ta_{ni}Y_i)\\ &=g(n)\prod_{i=1}^n\hat{\mathbb{E}}\exp\left(ta_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right),\end{aligned}$$

    which means that $\{a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\}$ are WA random variables. Without loss of generality, according to (3.2), we may assume that $a_{ni}\le 1/2$; then

    $$a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\le a_{ni}(|Y_i|+\hat{\mathbb{E}}|Y_i|)\le 2a_{ni}\beta_1 n^\alpha\le\beta_1 n^\alpha.$$

    We can verify that $a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)=\min\{a_{ni}(Y_i-\hat{\mathbb{E}}Y_i),\ \beta_1 n^\alpha\}$.

    So $\{a_{ni}(Y_i-\hat{\mathbb{E}}Y_i);1\le i\le n,n\ge 1\}$ satisfy the conditions in Lemma 2.8 with $\hat{\mathbb{E}}\left(a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right)=0$. Taking $x=\frac{\varepsilon n^\alpha}{2}$ and $d=\beta_1 n^\alpha=\frac{[\alpha(p\wedge 2)-1]\varepsilon n^\alpha}{4(\alpha p-1+\delta)}$ in Lemma 2.8, we obtain

    $$\begin{aligned}I_2&\ll\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>\frac{\varepsilon n^\alpha}{2}\right)\\ &\le\sum_{n=1}^\infty n^{\alpha p-2}\left[\mathbb{V}\left(\max_{1\le i\le n}\left(a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right)>d\right)+g(n)\exp\left(\frac{\varepsilon n^\alpha}{2d}-\frac{\varepsilon n^\alpha}{2d}\ln\left(1+\frac{\varepsilon n^\alpha d}{2\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|^2}\right)\right)\right]\\ &\ll\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}\left(|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|>\beta_1 n^\alpha\right)+c\sum_{n=1}^\infty n^{\alpha p-2}g(n)\left(n^{-2\alpha}\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|^2\right)^{\frac{\varepsilon}{2\beta_1}}\\ &:=I_{21}+I_{22}.\end{aligned}$$

    Let $\beta>0$ and let $g_j^\beta(x)\in C_{l,Lip}(\mathbb{R})$, $j\ge 1$, be even functions such that $0\le g_j^\beta(x)\le 1$ for all $x$, $g_j^\beta(x)=1$ if $\beta 2^{(j-1)\alpha}/\mu\le|x|\le\beta 2^{j\alpha}/\mu$, and $g_j^\beta(x)=0$ if $|x|<\beta 2^{(j-1)\alpha}$ or $|x|>(1+\mu)\beta 2^{j\alpha}/\mu$. Then for any $l>0$,

    $$g_j^\beta(X)\le I\left(\beta 2^{\alpha(j-1)}<|X|\le(1+\mu)\beta 2^{\alpha j}/\mu\right),\quad |X|^l g\left(\frac{\mu|X|}{\beta 2^{\alpha k}}\right)\le\frac{\beta^l}{\mu^l}+\sum_{j=1}^k|X|^l g_j^\beta(X). \quad (3.15)$$

    Define the truncation $Y$ of $X$ analogously to (3.1):

    $$Y=-\beta_1 n^\alpha I(X<-\beta_1 n^\alpha)+XI(|X|\le\beta_1 n^\alpha)+\beta_1 n^\alpha I(X>\beta_1 n^\alpha).$$

    According to the Markov inequality, the $C_r$ inequality, (3.2), (3.3), (3.14), (3.15), Lemma 2.7, $q>p$ and the fact that $g(x)$ is decreasing for $x\ge 0$, we then have

    $$\begin{aligned}I_{21}&\ll\sum_{n=1}^\infty n^{\alpha p-2}n^{-\alpha q}\sum_{i=1}^n a_{ni}^q\hat{\mathbb{E}}|Y_i|^q\ll\sum_{n=1}^\infty n^{\alpha p-\alpha q-1}\hat{\mathbb{E}}|Y|^q\\ &\ll\sum_{n=1}^\infty n^{\alpha p-\alpha q-1}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)+\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>cn^\alpha)\\ &\ll c+\sum_{k=1}^\infty\sum_{n=2^{k-1}}^{2^k-1} n^{\alpha p-\alpha q-1}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_1 n^\alpha}\right)\\ &\ll c+\sum_{k=1}^\infty 2^{k(p-q)\alpha}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_1 2^{k\alpha}}\right)\\ &\ll c+\sum_{k=1}^\infty 2^{k(p-q)\alpha}\hat{\mathbb{E}}\left(\frac{\beta_1^q}{\mu^q}+\sum_{j=1}^k|X|^q g_j^{\beta_1}(X)\right)\\ &\ll c+\sum_{k=1}^\infty 2^{k(p-q)\alpha}+\sum_{k=1}^\infty 2^{k(p-q)\alpha}\sum_{j=1}^k\hat{\mathbb{E}}|X|^q g_j^{\beta_1}(X)\\ &\ll c+\sum_{j=1}^\infty\hat{\mathbb{E}}|X|^q g_j^{\beta_1}(X)\sum_{k=j}^\infty 2^{k(p-q)\alpha}\\ &\ll c+\sum_{j=1}^\infty 2^{j(p-q)\alpha}\hat{\mathbb{E}}|X|^q g_j^{\beta_1}(X)\\ &\ll c+\sum_{j=1}^\infty 2^{\alpha pj}\mathbb{V}(|X|>c2^{j\alpha})<\infty.\end{aligned}$$

    Next, we prove $I_{22}<\infty$. If $p\ge 2$, then $d=\frac{(2\alpha-1)\varepsilon n^\alpha}{4(\alpha p-1+\delta)}$. By (3.3), (3.4), (3.13), $\alpha p>1$, the $C_r$ inequality and the condition on $g(n)$, there exists a slowly varying function $L(n)$ such that $g(n)=n^\delta L(n)$, and we have

    $$\begin{aligned}I_{22}&\ll\sum_{n=1}^\infty n^{\alpha p-2}g(n)\left(n^{-2\alpha}\sum_{i=1}^n a_{ni}^2\hat{\mathbb{E}}Y_i^2\right)^{\frac{\varepsilon}{2\beta_1}}\le c\sum_{n=1}^\infty n^{\alpha p-2}g(n)\left(n^{1-2\alpha}\hat{\mathbb{E}}X^2\right)^{\frac{\varepsilon}{2\beta_1}}\\ &\le c\sum_{n=1}^\infty n^{\alpha p-2+\delta-2(\alpha p-1+\delta)}L(n)=c\sum_{n=1}^\infty n^{-\alpha p-\delta}L(n)<\infty.\end{aligned}$$

    If $p<2$, then $d=\frac{(\alpha p-1)\varepsilon n^\alpha}{4(\alpha p-1+\delta)}$. By (3.3), (3.4), (3.13), $\alpha p>1$ and the $C_r$ inequality, we have

    $$\begin{aligned}I_{22}&\ll\sum_{n=1}^\infty n^{\alpha p-2}g(n)\left(n^{-2\alpha}\sum_{i=1}^n a_{ni}^2\hat{\mathbb{E}}Y_i^2\right)^{\frac{\varepsilon}{2\beta_1}}\le c\sum_{n=1}^\infty n^{\alpha p-2}g(n)\left(n^{-2\alpha}\sum_{i=1}^n a_{ni}^2\left(n^{\alpha(2-p)}\hat{\mathbb{E}}|X|^p\right)\right)^{\frac{\varepsilon}{2\beta_1}}\\ &\le c\sum_{n=1}^\infty n^{\alpha p-2+\delta-2(\alpha p-1+\delta)}L(n)=c\sum_{n=1}^\infty n^{-\alpha p-\delta}L(n)<\infty.\end{aligned}$$

    Hence, the proof of Theorem 3.1 is completed.

    Proof of Theorem 3.2. Without loss of generality, we can again assume that $a_{ni}\ge 0$ for $1\le i\le n$ and $n\ge 1$. Here, the definitions of $g(x)$ and $g_j^\beta(x)$ are the same as in the proof of Theorem 3.1. For $\varepsilon>0$, we have that

    $$\begin{aligned}\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|-\varepsilon n^\alpha\right)_+^r&=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_0^\infty\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|-\varepsilon n^\alpha>t^{1/r}\right)\mathrm{d}t\\ &=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_0^{n^{\alpha r}}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|-\varepsilon n^\alpha>t^{1/r}\right)\mathrm{d}t+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|-\varepsilon n^\alpha>t^{1/r}\right)\mathrm{d}t\\ &\le\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|>\varepsilon n^\alpha\right)+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\left|\sum_{i=1}^n a_{ni}X_i\right|>t^{1/r}\right)\mathrm{d}t\\ &:=J_1+J_2.\end{aligned}$$

    According to Theorem 3.1, we have $J_1<\infty$. So to prove (3.8), we only need to prove $J_2<\infty$. Hence, we first prove that

    $$H:=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\sum_{i=1}^n a_{ni}X_i>t^{1/r}\right)\mathrm{d}t<\infty. \quad (3.16)$$

    The sequence $\{Y_i;1\le i\le n\}$ is defined by (3.6). For any $1\le i\le n$, $n\ge 1$, and $t\ge n^{\alpha r}$, denote

    $$Z_i=(X_i+\beta_2 t^{1/r})I(X_i<-\beta_2 t^{1/r})+(X_i-\beta_2 t^{1/r})I(X_i>\beta_2 t^{1/r}). \quad (3.17)$$

    We have

    $$\begin{aligned}H&\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\mathbb{V}(|X_i|>\beta_2 t^{1/r})\,\mathrm{d}t+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\sum_{i=1}^n a_{ni}Y_i>t^{1/r}\right)\mathrm{d}t\\ &\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\mathbb{V}(|X_i|>\beta_2 t^{1/r})\,\mathrm{d}t+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>t^{1/r}-\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\right)\mathrm{d}t\\ &:=H_1+H_2.\end{aligned}$$

    In order to prove $H<\infty$, it suffices to show $H_1<\infty$ and $H_2<\infty$. Firstly, we prove $H_1<\infty$: by (3.7), (3.12), Lemma 2.7 (i) and the fact that $g(x)$ is decreasing for $x\ge 0$, we have

    $$\begin{aligned}H_1&\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\hat{\mathbb{E}}\left(1-g\left(\frac{|X_i|}{\beta_2 t^{1/r}}\right)\right)\mathrm{d}t\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_2 t^{1/r}}\right)\right)\mathrm{d}t\\ &=\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty\int_{m^{\alpha r}}^{(m+1)^{\alpha r}}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_2 t^{1/r}}\right)\right)\mathrm{d}t\\ &\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty m^{\alpha r-1}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_2 m^\alpha}\right)\right)\\ &=\sum_{m=1}^\infty m^{\alpha r-1}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\beta_2 m^\alpha}\right)\right)\sum_{n=1}^m n^{\alpha p-\alpha r-1}\\ &\ll\begin{cases}\sum_{m=1}^\infty m^{\alpha p-1}\mathbb{V}(|X|>\mu\beta_2 m^\alpha) & \text{if } r<p,\\ \sum_{m=1}^\infty m^{\alpha p-1}\log m\,\mathbb{V}(|X|>\mu\beta_2 m^\alpha) & \text{if } r=p,\\ \sum_{m=1}^\infty m^{\alpha r-1}\mathbb{V}(|X|>\mu\beta_2 m^\alpha) & \text{if } r>p,\end{cases}\\ &=\begin{cases}\sum_{m=1}^\infty m^{\alpha(p\vee r)-1}\mathbb{V}(|X|>cm^\alpha)<\infty & \text{if } r\ne p,\\ \sum_{m=1}^\infty m^{\alpha p-1}\log m\,\mathbb{V}(|X|>cm^\alpha)<\infty & \text{if } r=p.\end{cases}\end{aligned}$$

    Then, we prove $H_2<\infty$. Firstly, we will show that

    $$\sup_{t\ge n^{\alpha r}}t^{-1/r}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|\to 0,\quad n\to\infty.$$

    Similar to (3.14), by (3.12), (3.17) and the $C_r$ inequality, for any $\lambda>0$ we can get that

    $$\hat{\mathbb{E}}|Y_i|^\lambda\ll\hat{\mathbb{E}}|X|^\lambda g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)+\beta_2^\lambda t^{\lambda/r}\mathbb{V}(|X|>\mu\beta_2 t^{1/r}),\quad \hat{\mathbb{E}}|Z_i|^\lambda\le\hat{\mathbb{E}}|X_i|^\lambda\left(1-g\left(\frac{|X_i|}{\beta_2 t^{1/r}}\right)\right)\ll\hat{\mathbb{E}}|X|^\lambda\left(1-g\left(\frac{|X|}{\beta_2 t^{1/r}}\right)\right). \quad (3.18)$$

    Define the truncation $Y$ of $X$ analogously to (3.6):

    $$Y=-\beta_2 t^{1/r}I(X<-\beta_2 t^{1/r})+XI(|X|\le\beta_2 t^{1/r})+\beta_2 t^{1/r}I(X>\beta_2 t^{1/r}).$$

    When $0<p\wedge r\le 1$: since $t\ge n^{\alpha r}$, $\hat{\mathbb{E}}|X|^{p\wedge r}<\infty$ and $\alpha(p\wedge r)>1$, we get

    $$\begin{aligned}\sup_{t\ge n^{\alpha r}}t^{-1/r}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|&\ll\sup_{t\ge n^{\alpha r}}t^{-1/r}n\hat{\mathbb{E}}|Y|\\ &\ll\sup_{t\ge n^{\alpha r}}t^{-1/r}n\left(\hat{\mathbb{E}}|X|g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)+\beta_2 t^{1/r}\mathbb{V}(|X|>\mu\beta_2 t^{1/r})\right)\\ &=\sup_{t\ge n^{\alpha r}}t^{-1/r}n\left(\hat{\mathbb{E}}|X|^{p\wedge r}|X|^{1-(p\wedge r)}g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)+\beta_2 t^{1/r}\mathbb{V}(|X|>\mu\beta_2 t^{1/r})\right)\\ &\ll cn^{1-\alpha(p\wedge r)}\hat{\mathbb{E}}|X|^{p\wedge r}+\beta_2 n\mathbb{V}(|X|>cn^\alpha)\to 0,\quad n\to\infty.\end{aligned}$$

    When $p\wedge r>1$: since $\hat{\mathbb{E}}X_i=\hat{\varepsilon}X_i=0$ and $t\ge n^{\alpha r}$, we can get that

    $$\begin{aligned}\sup_{t\ge n^{\alpha r}}t^{-1/r}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|&\le\sup_{t\ge n^{\alpha r}}t^{-1/r}\sum_{i=1}^n a_{ni}\left|\hat{\mathbb{E}}X_i-\hat{\mathbb{E}}Y_i\right|\le\sup_{t\ge n^{\alpha r}}t^{-1/r}\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}|Z_i|\\ &\ll cn^{1-\alpha}\hat{\mathbb{E}}|X|\left(1-g\left(\frac{|X|}{\beta_2 n^\alpha}\right)\right)\le cn^{1-\alpha}\hat{\mathbb{E}}\frac{|X||X|^{p\wedge r-1}}{(\mu\beta_2 n^\alpha)^{p\wedge r-1}}\left(1-g\left(\frac{|X|}{\beta_2 n^\alpha}\right)\right)\\ &\le cn^{1-\alpha(p\wedge r)}\hat{\mathbb{E}}|X|^{p\wedge r}\to 0,\quad n\to\infty.\end{aligned}$$

    It follows that for all $n$ large enough,

    $$\sup_{t\ge n^{\alpha r}}t^{-1/r}\left|\sum_{i=1}^n a_{ni}\hat{\mathbb{E}}Y_i\right|<\frac{1}{2},$$

    which implies that

    $$H_2\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>\frac{t^{1/r}}{2}\right)\mathrm{d}t.$$

    For fixed $t\ge n^{\alpha r}$ and $n\ge 1$, by the definitions in Theorem 3.2 and the assumption $a_{ni}\le 1/2$, we know that $\{a_{ni}(Y_i-\hat{\mathbb{E}}Y_i);1\le i\le n,n\ge 1\}$ are WA random variables with $\hat{\mathbb{E}}\left(a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right)=0$ and $a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)=\min\{a_{ni}(Y_i-\hat{\mathbb{E}}Y_i),\ \beta_2 t^{1/r}\}$. Applying Lemma 2.8 to $\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>t^{1/r}/2\right)$, with $0<\beta_2<\min\left\{\frac{2\wedge(p\wedge r)}{2r},\frac{\alpha[2\wedge(p\wedge r)]-1}{2(\alpha p-1+\delta)}\right\}$, $d=\beta_2 t^{1/r}$ and $x=t^{1/r}/2$, we have

    $$\begin{aligned}\mathbb{V}\left(\sum_{i=1}^n a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)>\frac{t^{1/r}}{2}\right)&\le\mathbb{V}\left(\max_{1\le i\le n}\left(a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)\right)>d\right)+g(n)\exp\left(\frac{x}{d}-\frac{x}{d}\ln\left(1+\frac{xd}{\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|^2}\right)\right)\\ &\ll\sum_{i=1}^n\mathbb{V}\left(|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|>ct^{1/r}\right)+cg(n)\left(t^{-2/r}\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|^2\right)^{\frac{1}{2\beta_2}},\end{aligned}$$

    thus

    $$\begin{aligned}H_2&\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\mathbb{V}\left(|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|>ct^{1/r}\right)\mathrm{d}t\\ &\quad+c\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}g(n)\int_{n^{\alpha r}}^\infty\left(t^{-2/r}\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y_i-\hat{\mathbb{E}}Y_i)|^2\right)^{\frac{1}{2\beta_2}}\mathrm{d}t\\ &:=H_{21}+H_{22}.\end{aligned}$$

    So, to prove $H_2<\infty$, we first need to prove $H_{21}<\infty$. By the Markov inequality, the $C_r$ inequality, (3.12), (3.15), (3.17), Lemma 2.7 (ii), $q>p\vee r$ and $H_1<\infty$, we have that

    $$\begin{aligned}H_{21}&\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty t^{-q/r}\sum_{i=1}^n a_{ni}^q\hat{\mathbb{E}}|Y_i|^q\,\mathrm{d}t\\ &\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty t^{-q/r}\left(\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)+\beta_2^q t^{q/r}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\mu\beta_2 t^{1/r}}\right)\right)\right)\mathrm{d}t\\ &\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty t^{-q/r}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)\mathrm{d}t+c\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{\mu\beta_2 t^{1/r}}\right)\right)\mathrm{d}t\\ &\ll c+\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty\int_{m^{\alpha r}}^{(m+1)^{\alpha r}}t^{-q/r}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2 t^{1/r}}\right)\mathrm{d}t\\ &\ll c+\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty m^{\alpha r-\alpha q-1}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2(m+1)^\alpha}\right)\\ &= c+\sum_{m=1}^\infty m^{\alpha r-\alpha q-1}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2(m+1)^\alpha}\right)\sum_{n=1}^m n^{\alpha p-\alpha r-1}\\ &\ll c+\begin{cases}\sum_{m=1}^\infty m^{\alpha(p\vee r)-\alpha q-1}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2(m+1)^\alpha}\right) & \text{if } r\ne p,\\ \sum_{m=1}^\infty m^{\alpha r-\alpha q-1}\log m\,\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2(m+1)^\alpha}\right) & \text{if } r=p,\end{cases}\\ &\ll c+\begin{cases}\sum_{k=1}^\infty 2^{k[\alpha(p\vee r)-\alpha q]}\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2 2^{k\alpha}}\right) & \text{if } r\ne p,\\ \sum_{k=1}^\infty 2^{k(r-q)\alpha}\log 2^k\,\hat{\mathbb{E}}|X|^q g\left(\frac{\mu|X|}{\beta_2 2^{k\alpha}}\right) & \text{if } r=p,\end{cases}\\ &\ll c+\begin{cases}\sum_{k=1}^\infty 2^{k[\alpha(p\vee r)-\alpha q]}\hat{\mathbb{E}}\left(\frac{\beta_2^q}{\mu^q}+\sum_{j=1}^k|X|^q g_j^{\beta_2}(X)\right) & \text{if } r\ne p,\\ \sum_{k=1}^\infty 2^{k(r-q)\alpha}\log 2^k\,\hat{\mathbb{E}}\left(\frac{\beta_2^q}{\mu^q}+\sum_{j=1}^k|X|^q g_j^{\beta_2}(X)\right) & \text{if } r=p,\end{cases}\\ &\ll c+\begin{cases}\sum_{j=1}^\infty\hat{\mathbb{E}}|X|^q g_j^{\beta_2}(X)\sum_{k=j}^\infty 2^{k[\alpha(p\vee r)-\alpha q]} & \text{if } r\ne p,\\ \sum_{j=1}^\infty\hat{\mathbb{E}}|X|^q g_j^{\beta_2}(X)\sum_{k=j}^\infty 2^{k(r-q)\alpha}\log 2^k & \text{if } r=p,\end{cases}\\ &\ll c+\begin{cases}\sum_{j=1}^\infty 2^{\alpha(p\vee r)j}\mathbb{V}(|X|>c2^{j\alpha})<\infty & \text{if } r\ne p,\\ \sum_{j=1}^\infty 2^{\alpha rj}\log 2^j\,\mathbb{V}(|X|>c2^{j\alpha})<\infty & \text{if } r=p.\end{cases}\end{aligned}$$

    Then, we prove $H_{22}<\infty$. Similar to the previous proof, we consider the following two cases.

    If $p\wedge r\ge 2$: by $\beta_2<\frac{1}{r}$, $\alpha p-2+\delta+\frac{1-2\alpha}{2\beta_2}<-1$, (3.3), (3.7), (3.13) and the $C_r$ inequality, we have

    $$\begin{aligned}H_{22}&\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}g(n)\int_{n^{\alpha r}}^\infty\left(t^{-2/r}\sum_{i=1}^n a_{ni}^2\hat{\mathbb{E}}Y_i^2\right)^{\frac{1}{2\beta_2}}\mathrm{d}t\le c\sum_{n=1}^\infty n^{\alpha p-\alpha r-2+\frac{1}{2\beta_2}}g(n)\int_{n^{\alpha r}}^\infty\left(t^{-2/r}\hat{\mathbb{E}}X^2\right)^{\frac{1}{2\beta_2}}\mathrm{d}t\\ &\le c\sum_{n=1}^\infty n^{\alpha p-\alpha r-2+\frac{1}{2\beta_2}}g(n)\int_{n^{\alpha r}}^\infty t^{-\frac{1}{r\beta_2}}\,\mathrm{d}t\le c\sum_{n=1}^\infty n^{\alpha p-2+\delta+\frac{1-2\alpha}{2\beta_2}}L(n)<\infty.\end{aligned}$$

    If $p\wedge r<2$: by $\beta_2<\frac{p\wedge r}{2r}$, $\alpha p-2+\delta+\frac{1-(p\wedge r)\alpha}{2\beta_2}<-1$, $\hat{\mathbb{E}}|X|^{p\wedge r}<\infty$ and the $C_r$ inequality, we have

    $$\begin{aligned}H_{22}&\ll\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}g(n)\int_{n^{\alpha r}}^\infty\left(t^{-2/r}\sum_{i=1}^n a_{ni}^2\hat{\mathbb{E}}Y_i^2\right)^{\frac{1}{2\beta_2}}\mathrm{d}t\le c\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}g(n)\int_{n^{\alpha r}}^\infty\left(t^{-\frac{p\wedge r}{r}}\sum_{i=1}^n a_{ni}^2\hat{\mathbb{E}}|X|^{p\wedge r}\right)^{\frac{1}{2\beta_2}}\mathrm{d}t\\ &\le c\sum_{n=1}^\infty n^{\alpha p-\alpha r-2+\frac{1}{2\beta_2}}g(n)\int_{n^{\alpha r}}^\infty t^{-\frac{p\wedge r}{2r\beta_2}}\,\mathrm{d}t\le c\sum_{n=1}^\infty n^{\alpha p-2+\delta+\frac{1-(p\wedge r)\alpha}{2\beta_2}}L(n)<\infty.\end{aligned}$$

    We have thus proved (3.16). Considering $\{-X_n;n\ge 1\}$ instead of $\{X_n;n\ge 1\}$ in Theorem 3.2, the above argument still holds, so we can obtain

    $$\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\left(\sum_{i=1}^n a_{ni}X_i<-t^{1/r}\right)\mathrm{d}t<\infty. \quad (3.19)$$

    According to (3.16) and (3.19), we get $J_2<\infty$. This finishes the proof of Theorem 3.2.

    In conclusion, we have proved the complete convergence and complete integral convergence for weighted sums of WA random variables under sub-linear expectations.

    In this paper, we extend conclusions from probability space to the sub-linear expectation space and obtain the complete convergence and complete integral convergence for weighted sums of WA random variables under sub-linear expectations, which enriches the limit theory of WA random variable sequences in the sub-linear expectation space. In future work, we will establish, in the sub-linear expectation space, analogues of the important moment inequalities available in probability space, overcome the problems caused by the sub-additivity of $\mathbb{V}$ and $\hat{\mathbb{E}}$, and generalize complete convergence and complete integral convergence in the sub-linear expectation setting to obtain conclusions similar to those in the classical probability space.

    This paper was supported by the National Natural Science Foundation of China (12061028) and Guangxi Colleges and Universities Key Laboratory of Applied Statistics.

    All authors declare no conflicts of interest in this paper.



    [1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, Stoch. Anal. Appl., 2 (2006), 541-567. doi: 10.1007/978-3-540-70847-6_25
    [2] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223-2253. doi: 10.1016/j.spa.2007.10.015
    [3] S. G. Peng, Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Sci. China Ser. A-Math., 52 (2009), 1391-1411. doi: 10.1007/s11425-009-0121-8
    [4] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China-Math., 59 (2016), 751-768. doi: 10.1007/s11425-015-5105-2
    [5] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China-Math., 59 (2016), 2503-2526. doi: 10.1007/s11425-016-0079-1
    [6] L. X. Zhang, Self-normalized moderate deviation and laws of the iterated logarithm under g-expectation, Commun. Math. Stat., 4 (2016), 229-263. doi: 10.1007/s40304-015-0084-8
    [7] L. X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations, 2016. Available from: http://arXiv.org/abs/1608.00710v1.
    [8] Q. Y. Wu, Y. Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2017), 252-270. doi: 10.1016/j.jmaa.2017.11.053
    [9] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, P. Natl. A. Sci. USA, 33 (1947), 25-31. doi: 10.1073/pnas.33.2.25
    [10] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica, 16 (1988), 177-201.
    [11] D. H. Qiu, P. Y. Chen, Complete and complete moment convergence for i.i.d. random variables under exponential moment conditions, Commun. Stat.-Theory Methods, 46 (2017), 4510-4519. doi: 10.1080/03610926.2015.1085566
    [12] Q. Y. Wu, Y. Y. Jiang, Complete convergence and complete moment convergence for negatively associated sequences of random variables, J. Inequal. Appl., 2016 (2016), 157. doi: 10.1186/s13660-016-1107-z
    [13] A. T. Shen, Y. Zhang, W. J. Wang, Complete convergence and complete moment convergence for extended negatively dependent random variables, Filomat, 31 (2017), 1381-1394. doi: 10.2298/FIL1705381S
    [14] Q. Y. Wu, Y. Y. Jiang, Complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, Filomat, 34 (2020), 1093-1104. doi: 10.2298/FIL2004093W
    [15] F. X. Feng, D. C. Wang, Q. Y. Wu, H. W. Huang, Complete and complete moment convergence for weighted sums of arrays of rowwise negatively dependent random variables under the sub-linear expectations, Commun. Stat.-Theory Methods, 50 (2021), 594-608. doi: 10.1080/03610926.2019.1639747
    [16] Z. W. Liang, Q. Y. Wu, Theorems of complete convergence and complete integral convergence for END random variables under sub-linear expectations, J. Inequal. Appl., 2019 (2019), 114. doi: 10.1186/s13660-019-2064-0
    [17] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. doi: 10.1186/s13660-017-1538-1
    [18] D. W. Lu, Y. Meng, Complete and complete integral convergence for arrays of rowwise widely negative dependent random variables under the sub-linear expectations, Commun. Stat.-Theory Methods, 2020, 1786585. doi: 10.1080/03610926.2020.1786585
    [19] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. doi: 10.1016/j.jmaa.2019.123662
    [20] R. Hu, Q. Y. Wu, Complete convergence for weighted sums of widely acceptable random variables under sublinear expectations, Discrete Dyn. Nature Soc., 2021 (2021), 5526609. doi: 10.1155/2021/5526609
    [21] M. M. Ge, X. Deng, Complete moment convergence for weighted sums of extended negatively dependent random variables, J. Math. Inequal., 13 (2019), 159-175. doi: 10.7153/jmi-2019-13-12
    [22] E. Seneta, Regularly varying functions, Lecture Notes in Mathematics, Springer, Berlin, Heidelberg, 508 (1976), 1-52. doi: 10.1007/BFb0079658
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
