Research article

Complete convergence and complete integration convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space

  • Received: 19 October 2022 Revised: 10 December 2022 Accepted: 01 January 2023 Published: 09 January 2023
  • MSC : 60F15

  • In this paper, we study the complete convergence and the complete integral convergence for weighted sums of m-extended negatively dependent (m-END) random variables under the sub-linear expectations space, under the moment condition $\hat{\mathbb{E}}|X|^p\le C_{\mathbb{V}}(|X|^p)<\infty$ with $p>1/\alpha$ and $\alpha>3/2$. The results can be regarded as extensions of the complete convergence and the complete moment convergence in the classical probability space. In addition, we prove a Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of m-END random variables under the sub-linear expectations space.

    Citation: He Dong, Xili Tan, Yong Zhang. Complete convergence and complete integration convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space[J]. AIMS Mathematics, 2023, 8(3): 6705-6724. doi: 10.3934/math.2023340




    In the era of information modernization, limit theorems are widely used in economics, information science, and risk measurement. The limit theory of classical probability is built on additive probabilities and additive expectations, which are suitable when the model is certain. However, financial and economic problems involve varying degrees of uncertainty. In order to analyze and compute under uncertainty, Peng [1,2] introduced the concept of sub-linear expectations and constructed the basic framework of the sub-linear expectation theory. Sub-linear expectations relax the additivity of the probability and the expectation of classical probability; hence, the theory of sub-linear expectations is more complex and challenging. Under sub-linear expectations, Peng [3] established the central limit theorem. Inspired by Peng's work, many researchers have explored results under sub-linear expectations. Chen and Gan [4] obtained the limiting behavior of weighted sums of independent and identically distributed sequences. Hu and Zhou [5] demonstrated multi-dimensional central limit theorems and laws of large numbers. Zhang [6,7,8] obtained a series of important inequalities under sub-linear expectations. In addition, Zhang and Lin [9] studied Kolmogorov's strong law of large numbers. Lan and Zhang [10] proved several moment inequalities, including Bernstein's, Kolmogorov's and Rademacher's inequalities. Guo and Zhang [11] obtained a moderate deviation principle for m-dependent random variables under the sub-linear expectation.

    In 1947, the notion of complete convergence was introduced by Hsu and Robbins [12] as follows. Let $\{X_n, n\ge1\}$ be a sequence of independent and identically distributed random variables in a probability space $(\Omega,\mathcal{F},P)$ with $EX_1=0$ and $EX_1^2<\infty$, and set $S_n=\sum_{k=1}^n X_k$; then

    $$\sum_{n=1}^\infty P(|S_n|>n\varepsilon)<\infty, \quad \text{for all } \varepsilon>0.$$
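As a quick numerical illustration (not from the paper; the standard normal summands, the value $\varepsilon=1$ and the sample sizes below are assumptions of this note), the terms $P(|S_n|>n\varepsilon)$ can be estimated by Monte Carlo; their rapid decay in $n$ is what makes the Hsu-Robbins series finite.

```python
import random

def tail_prob(n, eps, trials=2000, seed=0):
    """Monte Carlo estimate of P(|S_n| > n*eps) for S_n a sum of n
    independent standard normal variables (mean 0, finite variance)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.gauss(0.0, 1.0) for _ in range(n))
        if abs(s) > n * eps:
            hits += 1
    return hits / trials

# The summands P(|S_n| > n*eps) decay rapidly in n.
estimates = [tail_prob(n, eps=1.0) for n in (1, 4, 16, 64)]
print(estimates)
```

With these choices $P(|S_n|>n)=P(|Z|>\sqrt{n})$ for a standard normal $Z$, so the estimates drop from about $0.32$ at $n=1$ to essentially $0$ by $n=64$.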

    In 1988, Chow [13] established complete moment convergence, which is stronger than complete convergence. In the classical probability space, the study of the complete convergence and the complete moment convergence for various sequences is relatively mature. For example, Yu et al. [14] proved the complete convergence for weighted sums of arrays of rowwise m-END random variables. Wu et al. [16,17] and Wang et al. [18] carried out a series of studies on extended negatively dependent (END) random variables. Meng et al. [15] and Ding et al. [19] demonstrated the complete convergence and the complete moment convergence for END random variables and widely orthant dependent (WOD) random variables, respectively. Based on the basic framework of sub-linear expectations, researchers have extended theories and properties of the classical probability space to sub-linear expectations. For instance, Feng et al. [20] studied the complete convergence and the complete moment convergence for weighted sums of arrays of rowwise negatively dependent (ND) random variables. Zhong and Wu [21], Jia and Wu [22], and Lu and Meng [23] obtained new results on complete convergence and complete integral convergence.

    This paper aims to prove the complete convergence and the complete integral convergence for weighted sums of arrays of rowwise m-END random variables under the sub-linear expectations space. The rest of the paper is organized as follows. In Section 2, we recall some basic notation and definitions, related properties under sub-linear expectations, and preliminary lemmas that are useful for proving the main theorems. In Section 3, the complete convergence, the complete integral convergence and the Marcinkiewicz-Zygmund type strong law of large numbers under the sub-linear expectations space are established. In the last section, the proofs of these theorems are given.

    We use the framework and notation of Peng [1,2]. Let $(\Omega,\mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega,\mathcal{F})$ such that if $X_1,X_2,\ldots,X_n\in\mathcal{H}$ then $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying

    $$|\varphi(x)-\varphi(y)|\le c(1+|x|^m+|y|^m)|x-y|, \quad \forall x,y\in\mathbb{R}^n,$$

    for some $c>0$ and $m\in\mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables. In this case we write $X\in\mathcal{H}$.

    Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}:\mathcal{H}\to\bar{\mathbb{R}}$ satisfying the following properties: for all $X,Y\in\mathcal{H}$, we have

    (a) Monotonicity: if $X\ge Y$ then $\hat{\mathbb{E}}[X]\ge\hat{\mathbb{E}}[Y]$;

    (b) Constant preserving: $\hat{\mathbb{E}}[c]=c$;

    (c) Sub-additivity: $\hat{\mathbb{E}}[X+Y]\le\hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]$;

    (d) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X]=\lambda\hat{\mathbb{E}}[X]$, $\forall\lambda\ge0$.

    Here $\bar{\mathbb{R}}=[-\infty,\infty]$. The triple $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sub-linear expectation space. Given a sub-linear expectation $\hat{\mathbb{E}}$, the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ is defined by

    $$\hat{\varepsilon}[X]=-\hat{\mathbb{E}}[-X], \quad \forall X\in\mathcal{H}.$$

    From the definition, it is easily shown that for all $X,Y\in\mathcal{H}$,

    $$\hat{\varepsilon}[X]\le\hat{\mathbb{E}}[X], \quad \hat{\mathbb{E}}[X+c]=\hat{\mathbb{E}}[X]+c, \quad |\hat{\mathbb{E}}[X-Y]|\le\hat{\mathbb{E}}|X-Y|, \quad \hat{\mathbb{E}}[X-Y]\ge\hat{\mathbb{E}}[X]-\hat{\mathbb{E}}[Y].$$
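A minimal finite illustration (an assumption of this note, not part of the paper): on a finite sample space, the upper expectation over a family of probability vectors is a canonical example of a sub-linear expectation, and the properties above can be checked directly. The three-point space and the two priors below are hypothetical choices.

```python
# Hypothetical finite model: Omega = {0, 1, 2}, two candidate priors.
PRIORS = [(0.2, 0.3, 0.5), (0.4, 0.4, 0.2)]

def E_hat(X):
    """Upper expectation sup_P E_P[X] over the finite prior set:
    a sub-linear expectation (monotone, sub-additive, etc.)."""
    return max(sum(p * x for p, x in zip(P, X)) for P in PRIORS)

def eps_hat(X):
    """Conjugate expectation: -E_hat(-X) = inf_P E_P[X]."""
    return -E_hat(tuple(-x for x in X))

X = (1.0, -2.0, 3.0)
Y = (0.5, 0.0, -1.0)

# (c) sub-additivity, and the conjugate bound eps_hat <= E_hat
add = tuple(a + b for a, b in zip(X, Y))
assert E_hat(add) <= E_hat(X) + E_hat(Y) + 1e-12
assert eps_hat(X) <= E_hat(X) + 1e-12
# (d) positive homogeneity
assert abs(E_hat(tuple(2 * x for x in X)) - 2 * E_hat(X)) < 1e-12
print(E_hat(X), eps_hat(X))
```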

    Definition 2.2. Let $\mathcal{G}\subset\mathcal{F}$. A function $V:\mathcal{G}\to[0,1]$ is called a capacity if

    (1) $V(\emptyset)=0$, $V(\Omega)=1$;

    (2) $V(A)\le V(B)$ for all $A\subset B$, $A,B\in\mathcal{G}$.

    It is called sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{G}$ with $A\cup B\in\mathcal{G}$. In the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, we define the pair $(\mathbb{V},\mathcal{V})$ of capacities by

    $$\mathbb{V}(A)=\inf\{\hat{\mathbb{E}}[\xi]: I(A)\le\xi,\ \xi\in\mathcal{H}\}, \quad \mathcal{V}(A)=1-\mathbb{V}(A^c), \quad \forall A\in\mathcal{F},$$

    where $A^c$ is the complement of $A$. It is obvious that $\mathbb{V}$ is sub-additive and

    $$\mathcal{V}(A)\le\mathbb{V}(A), \quad \forall A\in\mathcal{F},$$
    $$\mathbb{V}(A):=\hat{\mathbb{E}}[I_A], \quad \mathcal{V}(A):=\hat{\varepsilon}[I_A], \quad \text{if } I_A\in\mathcal{H},$$
    $$\hat{\mathbb{E}}[f]\le\mathbb{V}(A)\le\hat{\mathbb{E}}[g], \quad \hat{\varepsilon}[f]\le\mathcal{V}(A)\le\hat{\varepsilon}[g], \quad \text{if } f\le I_A\le g,\ f,g\in\mathcal{H}.$$

    For all $X\in\mathcal{H}$, $p>0$ and $x>0$,

    $$I(|X|>x)\le\frac{|X|^p}{x^p}I(|X|>x)\le\frac{|X|^p}{x^p}.$$

    Definition 2.3. We define the Choquet integrals $(C_{\mathbb{V}},C_{\mathcal{V}})$ by

    $$C_V[X]=\int_0^\infty V(X\ge t)\,dt+\int_{-\infty}^0[V(X\ge t)-1]\,dt,$$

    with $V$ being replaced by $\mathbb{V}$ and $\mathcal{V}$ respectively.
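For a nonnegative random variable on a finite space the Choquet integral reduces to a finite layer-cake sum over the sorted values of $X$. The sketch below (the two-prior upper capacity is an assumption carried over from this note, not from the paper) computes $C_V[X]=\int_0^\infty V(X\ge t)\,dt$ that way.

```python
# Assumed finite setting: Omega = {0, 1, 2}, upper capacity V(A) = max_P P(A).
PRIORS = [(0.2, 0.3, 0.5), (0.4, 0.4, 0.2)]

def V(event):
    """Upper capacity of a subset of {0, 1, 2}: max over the priors."""
    return max(sum(P[w] for w in event) for P in PRIORS)

def choquet(X):
    """C_V[X] for X >= 0 via the layer-cake formula: the capacity of each
    level set {X >= v}, weighted by the gap between consecutive values."""
    vals = sorted(set(X))
    total, prev = 0.0, 0.0
    for v in vals:
        if v > 0:
            event = [w for w in range(len(X)) if X[w] >= v]
            total += (v - max(prev, 0.0)) * V(event)
        prev = v
    return total

X = (0.0, 1.0, 3.0)
print(choquet(X))
```

For this $X$, $V(X\ge t)=0.8$ on $(0,1]$ and $0.5$ on $(1,3]$, so the integral equals $0.8+2\cdot0.5=1.8$.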

    Definition 2.4. [3] (Identical distribution) Let $X_1$ and $X_2$ be two n-dimensional random vectors defined respectively in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed, denoted by $X_1\overset{d}{=}X_2$, if

    $$\hat{\mathbb{E}}_1(\varphi(X_1))=\hat{\mathbb{E}}_2(\varphi(X_2)), \quad \forall\varphi\in C_{l,Lip}(\mathbb{R}^n),$$

    whenever the sub-linear expectations are finite. A sequence $\{X_n, n\ge1\}$ of random variables is said to be identically distributed if $X_i\overset{d}{=}X_1$ for each $i\ge1$.

    Definition 2.5. [7] (END) In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, random variables $\{X_n, n\ge1\}$ are called upper (resp. lower) extended negatively dependent if there is some dominating constant $K\ge1$ such that

    $$\hat{\mathbb{E}}\Big(\prod_{i=1}^n\varphi_i(X_i)\Big)\le K\prod_{i=1}^n\hat{\mathbb{E}}(\varphi_i(X_i)), \quad \forall n\ge1,$$

    whenever the non-negative functions $\varphi_i\in C_{l,Lip}(\mathbb{R})$, $i=1,2,\ldots$ are all non-decreasing (resp. all non-increasing). They are called END if they are both upper extended negatively dependent and lower extended negatively dependent.

    Definition 2.6. (m-END) Let $m\ge1$ be a fixed positive integer. In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, random variables $\{X_n, n\ge1\}$ are said to be m-END if for any $n\ge2$ and any $i_1,i_2,\ldots,i_n$ such that $|i_k-i_j|\ge m$ for all $1\le k\ne j\le n$, the random variables $X_{i_1},X_{i_2},\ldots,X_{i_n}$ are END, i.e.,

    $$\hat{\mathbb{E}}\Big(\prod_{k=1}^n\varphi_k(X_{i_k})\Big)\le K\prod_{k=1}^n\hat{\mathbb{E}}(\varphi_k(X_{i_k})), \quad \forall n\ge1,\ |i_k-i_j|\ge m,\ 1\le k\ne j\le n,$$

    where $K\ge1$ is some dominating constant and the non-negative functions $\varphi_i\in C_{l,Lip}(\mathbb{R})$, $i=1,2,\ldots$ are all non-decreasing or all non-increasing. An array of random variables $\{X_{ni}, n\ge1, i\ge1\}$ is called an array of rowwise m-END random variables if for every $n\ge1$, $\{X_{ni}, i\ge1\}$ is a sequence of m-END random variables, with a dominating sequence $\{K_n\ge1\}$.

    It is clear that if $\{X_n, n\ge1\}$ is a sequence of m-END random variables and $f_1(x),f_2(x),\ldots\in C_{l,Lip}(\mathbb{R})$ are all non-decreasing (or all non-increasing), then $\{f_n(X_n), n\ge1\}$ is also a sequence of m-END random variables.

    In the following, let $\{X_n, n\ge1\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. The symbol $C$ denotes a generic positive constant which may differ from one place to another, and $I(\cdot)$ denotes an indicator function. The following five lemmas are needed in the proofs of our theorems.

    Lemma 2.1. [20] (i) Markov's inequality: for all $X\in\mathcal{H}$,

    $$\mathbb{V}(|X|\ge x)\le\hat{\mathbb{E}}(|X|^p)/x^p, \quad \forall x>0,\ p>0.$$

    (ii) Hölder's inequality: for all $X,Y\in\mathcal{H}$ and $p,q>1$ satisfying $p^{-1}+q^{-1}=1$,

    $$\hat{\mathbb{E}}(|XY|)\le(\hat{\mathbb{E}}(|X|^p))^{1/p}(\hat{\mathbb{E}}(|Y|^q))^{1/q}.$$

    (iii) Jensen's inequality: for all $X\in\mathcal{H}$ and $0<r<s$,

    $$(\hat{\mathbb{E}}(|X|^r))^{1/r}\le(\hat{\mathbb{E}}(|X|^s))^{1/s}.$$
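The three inequalities of Lemma 2.1 can be sanity-checked numerically in the finite two-prior model used earlier in this note (the model, the test vectors and the exponents are all assumptions, not from the paper); the upper expectation $\hat{\mathbb{E}}=\max_P E_P$ satisfies each of them.

```python
# Assumed finite two-prior model; E_hat = max over linear expectations.
PRIORS = [(0.2, 0.3, 0.5), (0.4, 0.4, 0.2)]

def E_hat(X):
    return max(sum(p * x for p, x in zip(P, X)) for P in PRIORS)

X = (1.0, -2.0, 3.0)
Y = (0.5, 0.0, -1.0)

# Markov: V(|X| >= x) <= E_hat(|X|^p) / x^p, with V(A) = E_hat(I_A)
x, p = 2.0, 2.0
ind = tuple(1.0 if abs(v) >= x else 0.0 for v in X)
markov_lhs = E_hat(ind)
markov_rhs = E_hat(tuple(abs(v) ** p for v in X)) / x ** p

# Holder with p = q = 2 (Cauchy-Schwarz form)
holder_lhs = E_hat(tuple(abs(a * b) for a, b in zip(X, Y)))
holder_rhs = (E_hat(tuple(a * a for a in X)) ** 0.5
              * E_hat(tuple(b * b for b in Y)) ** 0.5)

# Jensen: (E_hat|X|^r)^(1/r) <= (E_hat|X|^s)^(1/s) for r < s
r, s = 1.0, 2.0
jensen_lhs = E_hat(tuple(abs(v) ** r for v in X)) ** (1 / r)
jensen_rhs = E_hat(tuple(abs(v) ** s for v in X)) ** (1 / s)

print(markov_lhs <= markov_rhs,
      holder_lhs <= holder_rhs + 1e-12,
      jensen_lhs <= jensen_rhs + 1e-12)
```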

    Lemma 2.2. [21] (i) Suppose $X\in\mathcal{H}$, $\alpha>0$ and $p>0$. Then for any $c>0$,

    $$C_{\mathbb{V}}(|X|^p)<\infty\iff\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>cn^\alpha)<\infty. \quad (2.1)$$

    (ii) If $C_{\mathbb{V}}(|X|^p)<\infty$, then for any $\theta>1$ and $c>0$,

    $$\sum_{k=1}^\infty\theta^{k\alpha p}\mathbb{V}(|X|>c\theta^{k\alpha})<\infty. \quad (2.2)$$

    Lemma 2.3. [7] (Rosenthal's inequalities) Let $\{X_n, n\ge1\}$ be a sequence of END random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k\le0$, and set $S_n=\sum_{k=1}^nX_k$, $B_n=\sum_{k=1}^n\hat{\mathbb{E}}X_k^2$, $M_{n,p}=\sum_{k=1}^n\hat{\mathbb{E}}|X_k|^p$. Then for any $p\ge2$ and all $x>0$,

    $$\mathbb{V}(S_n\ge x)\le(1+Ke)\frac{B_n}{x^2}, \quad (2.3)$$

    where $K$ is some dominating constant, and there exists a constant $C_p\ge1$ such that for all $x>0$ and $0<\delta\le1$,

    $$\mathbb{V}(S_n\ge x)\le C_p\delta^{-2p}K\frac{M_{n,p}}{x^p}+K\exp\Big\{-\frac{x^2}{2B_n(1+\delta)}\Big\}. \quad (2.4)$$

    With Lemma 2.3 in hand, we can get the following Rosenthal's inequalities for m-END random variables.

    Lemma 2.4. (Rosenthal's inequalities) Let $\{X_n, n\ge1\}$ be a sequence of m-END random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k\le0$, and set $S_n=\sum_{k=1}^nX_k$, $B_n=\sum_{k=1}^n\hat{\mathbb{E}}X_k^2$, $M_{n,p}=\sum_{k=1}^n\hat{\mathbb{E}}|X_k|^p$. Then for any $p\ge2$ and all $x>0$,

    $$\mathbb{V}(S_n\ge x)\le m^2(1+Ke)\frac{B_n}{x^2}, \quad (2.5)$$

    where $K$ is some dominating constant, and there exists a constant $C_p\ge1$ such that for all $x>0$ and $0<\delta\le1$,

    $$\mathbb{V}(S_n\ge x)\le C_p\delta^{-2p}m^pK\frac{M_{n,p}}{x^p}+mK\exp\Big\{-\frac{x^2}{2m^2B_n(1+\delta)}\Big\}. \quad (2.6)$$

    Proof. Let $r=[\frac{n}{m}]$ and define

    $$X'_i=\begin{cases}X_i, & 1\le i\le n;\\ 0, & i>n.\end{cases}$$

    Note that $S'_{mr+j}=\sum_{i=0}^rX'_{mi+j}$, $j=1,2,\ldots,m$; then

    $$S_n=\sum_{j=1}^m\sum_{i=0}^rX'_{mi+j}=\sum_{j=1}^mS'_{mr+j},$$

    and for all $x>0$ and $n\ge m$,

    $$(S_n\ge x)\subset\Big(S'_{mr+1}\ge\frac{x}{m}\Big)\cup\cdots\cup\Big(S'_{mr+m}\ge\frac{x}{m}\Big)=\bigcup_{j=1}^m\Big(S'_{mr+j}\ge\frac{x}{m}\Big). \quad (2.7)$$

    It follows from the definition of m-END random variables that $X'_j,X'_{m+j},\ldots,X'_{mr+j}$ are END random variables for each $j=1,2,\ldots,m$. Hence, by (2.3) and (2.7), for all $x>0$ and $n\ge m$ we have

    $$\mathbb{V}(S_n\ge x)\le\mathbb{V}\Big(\bigcup_{j=1}^m\Big(S'_{mr+j}\ge\frac{x}{m}\Big)\Big)\le\sum_{j=1}^m\mathbb{V}\Big(S'_{mr+j}\ge\frac{x}{m}\Big)\le\sum_{j=1}^m(1+Ke)\frac{\sum_{i=0}^r\hat{\mathbb{E}}(X'_{mi+j})^2}{(x/m)^2}=m^2(1+Ke)\frac{B_n}{x^2},$$

    which implies (2.5).

    By (2.4) and (2.7), for all $x>0$, $n\ge m$ and $p\ge2$, we get

    $$\mathbb{V}(S_n\ge x)\le\sum_{j=1}^m\mathbb{V}\Big(S'_{mr+j}\ge\frac{x}{m}\Big)\le\sum_{j=1}^m\Big(C_p\delta^{-2p}K\frac{\sum_{i=0}^r\hat{\mathbb{E}}|X'_{mi+j}|^p}{(x/m)^p}+K\exp\Big\{-\frac{(x/m)^2}{2\sum_{i=0}^r\hat{\mathbb{E}}(X'_{mi+j})^2(1+\delta)}\Big\}\Big)\le C_p\delta^{-2p}m^pK\frac{M_{n,p}}{x^p}+mK\exp\Big\{-\frac{x^2}{2m^2B_n(1+\delta)}\Big\},$$

    which implies (2.6).

    This finishes the proof of Lemma 2.4.
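The blocking decomposition behind Lemma 2.4 is pure index bookkeeping: the indices $\{j, m+j, 2m+j, \ldots\}$ in each of the $m$ blocks are spaced at least $m$ apart (so each block is END), and together the blocks partition $\{1,\ldots,n\}$. A small sketch with assumed values $n=10$, $m=3$:

```python
def blocks(n, m):
    """Return the index blocks {j, m+j, ..., mr+j} for j = 1..m used in
    the proof of Lemma 2.4, with indices beyond n discarded (there
    X'_i = 0)."""
    r = n // m
    return [[m * i + j for i in range(r + 1) if m * i + j <= n]
            for j in range(1, m + 1)]

n, m = 10, 3
B = blocks(n, m)
# The blocks partition {1, ..., n} ...
assert sorted(i for b in B for i in b) == list(range(1, n + 1))
# ... and within each block consecutive indices differ by exactly m,
# so each block is an END subsequence by Definition 2.6.
assert all(b[k + 1] - b[k] == m for b in B for k in range(len(b) - 1))
print(B)
```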

    Lemma 2.5. [7] (Borel-Cantelli Lemma) Let $\{A_n, n\ge1\}$ be a sequence of events in $\mathcal{F}$. Suppose that $\mathbb{V}$ is a countably sub-additive capacity. If $\sum_{n=1}^\infty\mathbb{V}(A_n)<\infty$, then $\mathbb{V}(A_n,\ i.o.)=0$, where $\{A_n,\ i.o.\}=\bigcap_{n=1}^\infty\bigcup_{i=n}^\infty A_i$.

    Theorem 3.1. Let $\{X, X_{ni}, n\ge1, 1\le i\le n\}$ be an array of rowwise m-END and identically distributed random variables under sub-linear expectations with $\hat{\mathbb{E}}(X_{ni})=\hat{\varepsilon}(X_{ni})=0$, and let $\{a_{ni}, n\ge1, 1\le i\le n\}$ be an array of real numbers. Suppose that $\alpha>3/2$, $p>1/\alpha$, $q>\max\{2,p\}$,

    $$\sum_{i=1}^n|a_{ni}|^q=O(n), \quad (3.1)$$

    and

    $$\hat{\mathbb{E}}|X|^p\le C_{\mathbb{V}}(|X|^p)<\infty. \quad (3.2)$$

    Then for any $\varepsilon>0$,

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>\varepsilon n^\alpha\Big\}<\infty. \quad (3.3)$$

    Theorem 3.2. Suppose that the conditions of Theorem 3.1 hold and $0<r<p$. Then for any $\varepsilon>0$,

    $$\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|-\varepsilon n^\alpha\Big\}_+^r<\infty. \quad (3.4)$$

    Theorem 3.3. Suppose that the conditions of Theorem 3.1 hold and $\alpha p=2$. Then

    $$n^{-2/p}\sum_{i=1}^na_{ni}X_{ni}\to0 \quad \text{a.s. } \mathbb{V}, \quad n\to\infty. \quad (3.5)$$
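Classical probability (a single prior) is a special case of a sub-linear expectation space, so Theorem 3.3 covers in particular the classical Marcinkiewicz-Zygmund strong law. A simulation sketch of (3.5) under assumed choices of this note ($p=2$, hence $\alpha=1$; weights $a_{ni}\equiv1$; centered uniform summands):

```python
import random

def weighted_average(n, seed=1):
    """n^{-2/p} * sum_{i<=n} a_ni X_ni with p = 2, a_ni = 1 and
    X_ni ~ Uniform(-1, 1) iid (all assumed choices)."""
    rng = random.Random(seed)
    s = sum(rng.uniform(-1.0, 1.0) for _ in range(n))
    return s / n

vals = [abs(weighted_average(n)) for n in (10, 100, 10_000)]
print(vals)  # the normalized sums shrink toward 0 as n grows
```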

    Remark 3.1. Theorems 3.1 and 3.3 extend the corresponding results of Yu et al. [14] from the classical probability space to the sub-linear expectations space.

    Remark 3.2. Under sub-linear expectations, the main purpose of our paper is to improve the result of Zhong and Wu [21] from END random variables to arrays of rowwise m-END random variables, and extend the range of p.

    Remark 3.3. According to Definition 2.6, we can see that if m=1, then the concept of m-END random variables reduces to END random variables under sub-linear expectations. Hence, the concept of m-END random variables is a natural extension of END random variables, m-END random variables include END random variables and ND random variables. So Theorem 3.1, Theorem 3.2 and Theorem 3.3 also hold for the arrays of END random variables and ND random variables under sub-linear expectations.

    Proof of Theorem 3.1. Since

    $$\sum_{i=1}^na_{ni}X_{ni}=\sum_{i=1}^na_{ni}^+X_{ni}-\sum_{i=1}^na_{ni}^-X_{ni},$$

    for any $\varepsilon>0$ we have

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>\varepsilon n^\alpha\Big\}\le\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}^+X_{ni}\Big|>\frac{\varepsilon n^\alpha}{2}\Big\}+\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}^-X_{ni}\Big|>\frac{\varepsilon n^\alpha}{2}\Big\}. \quad (4.1)$$

    Hence, without loss of generality, we may assume $a_{ni}\ge0$ for all $n\ge1$ and $1\le i\le n$, and it suffices to show that

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\sum_{i=1}^na_{ni}X_{ni}>\varepsilon n^\alpha\Big\}<\infty, \quad \forall\varepsilon>0, \quad (4.2)$$

    since $\{-X_{ni}, n\ge1, i\ge1\}$ still satisfies the conditions of Theorem 3.1, which then yields

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\sum_{i=1}^na_{ni}X_{ni}<-\varepsilon n^\alpha\Big\}<\infty, \quad \forall\varepsilon>0. \quad (4.3)$$

    Hence, (3.3) follows from (4.2) and (4.3).

    In the following, we prove (4.2). For all $n\ge1$ and $1\le i\le n$, denote

    $$X'_{ni}=-n^\alpha I(X_{ni}<-n^\alpha)+X_{ni}I(|X_{ni}|\le n^\alpha)+n^\alpha I(X_{ni}>n^\alpha),$$
    $$X''_{ni}=X_{ni}-X'_{ni}=(X_{ni}+n^\alpha)I(X_{ni}<-n^\alpha)+(X_{ni}-n^\alpha)I(X_{ni}>n^\alpha). \quad (4.4)$$

    By Definition 2.6, $\{X'_{ni}, n\ge1, 1\le i\le n\}$ and $\{a_{ni}X'_{ni}, n\ge1, 1\le i\le n\}$ are still arrays of rowwise m-END random variables. For any $0<\beta\le q$, by Hölder's inequality and (3.1), we obtain

    $$\sum_{i=1}^na_{ni}^\beta\le\Big(\sum_{i=1}^na_{ni}^q\Big)^{\beta/q}\Big(\sum_{i=1}^n1\Big)^{1-\beta/q}\le Cn. \quad (4.5)$$
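The Hölder-type weight bound (4.5) is easy to check numerically (the weight vector and exponents below are hypothetical): for $0<\beta\le q$, $\sum_i a_i^\beta\le(\sum_i a_i^q)^{\beta/q}\,n^{1-\beta/q}$.

```python
def lhs_rhs(a, beta, q):
    """Both sides of the bound (4.5) for nonnegative weights a."""
    n = len(a)
    lhs = sum(x ** beta for x in a)
    rhs = sum(x ** q for x in a) ** (beta / q) * n ** (1 - beta / q)
    return lhs, rhs

a = [0.5, 1.2, 2.0, 0.1, 3.3]  # hypothetical nonnegative weights
for beta in (0.5, 1.0, 2.0):
    lo, hi = lhs_rhs(a, beta, q=3.0)
    assert lo <= hi + 1e-12  # Holder's inequality with exponent q/beta
print("Holder weight bound verified")
```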

    For any $\varepsilon>0$,

    $$\Big\{\sum_{i=1}^na_{ni}X_{ni}>\varepsilon n^\alpha\Big\}\subset\Big\{\bigcup_{i=1}^n(|X_{ni}|>n^\alpha)\Big\}\cup\Big\{\sum_{i=1}^na_{ni}X'_{ni}>\varepsilon n^\alpha\Big\},$$

    so it is easy to see that

    $$\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big(\sum_{i=1}^na_{ni}X_{ni}>\varepsilon n^\alpha\Big)\le\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\mathbb{V}(|X_{ni}|>n^\alpha)+\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big(\sum_{i=1}^na_{ni}X'_{ni}>\varepsilon n^\alpha\Big)=:H_1+H_2.$$

    Hence, it suffices to prove $H_1<\infty$ and $H_2<\infty$.

    For $0<\mu<1$, let $g(x)\in C_{l,Lip}(\mathbb{R})$ be decreasing for $x\ge0$, with $0\le g(x)\le1$ for all $x\in\mathbb{R}$, $g(x)=1$ if $|x|\le\mu$, and $g(x)=0$ if $|x|>1$. Then

    $$I(|x|\le\mu)\le g(|x|)\le I(|x|\le1), \quad I(|x|>1)\le1-g(|x|)\le I(|x|>\mu). \quad (4.6)$$

    By (4.6) and (2.1) in Lemma 2.2,

    $$H_1\le\sum_{n=1}^\infty n^{\alpha p-2}\sum_{i=1}^n\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_{ni}|}{n^\alpha}\Big)\Big)=\sum_{n=1}^\infty n^{\alpha p-1}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{n^\alpha}\Big)\Big)\le\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>\mu n^\alpha)<\infty.$$
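One concrete choice of the auxiliary function $g$ (an assumption of this note; any function with the stated properties works) is piecewise linear: $g=1$ on $[0,\mu]$, $g=0$ beyond $1$, linear in between. It is globally Lipschitz, hence in $C_{l,Lip}(\mathbb{R})$, and the sandwich bounds (4.6) can be checked pointwise.

```python
MU = 0.5  # assumed value of the parameter mu in (0, 1)

def g(x):
    """Lipschitz cutoff: 1 for |x| <= mu, 0 for |x| >= 1, linear between."""
    a = abs(x)
    if a <= MU:
        return 1.0
    if a >= 1.0:
        return 0.0
    return (1.0 - a) / (1.0 - MU)

for x in [0.0, 0.3, 0.5, 0.7, 0.99, 1.0, 2.0]:
    ind_le_mu = 1.0 if abs(x) <= MU else 0.0
    ind_le_1 = 1.0 if abs(x) <= 1.0 else 0.0
    ind_gt_mu = 1.0 if abs(x) > MU else 0.0
    ind_gt_1 = 1.0 if abs(x) > 1.0 else 0.0
    # I(|x| <= mu) <= g(x) <= I(|x| <= 1)
    assert ind_le_mu <= g(x) <= ind_le_1
    # I(|x| > 1) <= 1 - g(x) <= I(|x| > mu)
    assert ind_gt_1 <= 1.0 - g(x) <= ind_gt_mu
print("sandwich bounds (4.6) hold")
```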

    Next we estimate $H_2$. For any $q>0$, by the $c_r$ inequality, (4.4) and (4.6),

    $$|X'_{ni}|^q\le|X_{ni}|^qI(|X_{ni}|\le n^\alpha)+n^{\alpha q}I(|X_{ni}|>n^\alpha)\le|X_{ni}|^qg\Big(\frac{\mu|X_{ni}|}{n^\alpha}\Big)+n^{\alpha q}\Big(1-g\Big(\frac{|X_{ni}|}{n^\alpha}\Big)\Big),$$

    and hence

    $$\hat{\mathbb{E}}|X'_{ni}|^q\le\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+n^{\alpha q}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{n^\alpha}\Big)\Big)\le\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+n^{\alpha q}\mathbb{V}(|X|>\mu n^\alpha). \quad (4.7)$$

    Case A1: $0<p<1$.

    By (4.5), (4.7), Markov's inequality and $\alpha p>1$, we get

    $$n^{-\alpha}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}X'_{ni}\Big|\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|X'_{ni}|\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}\Big(|X_{ni}|g\Big(\frac{\mu|X_{ni}|}{n^\alpha}\Big)\Big)+\sum_{i=1}^na_{ni}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_{ni}|}{n^\alpha}\Big)\Big)$$
    $$\le n^{1-\alpha}\hat{\mathbb{E}}\Big[|X|I\Big(|X|\le\frac{1}{\mu}n^\alpha\Big)\Big]+n\mathbb{V}(|X|>\mu n^\alpha)\le Cn^{1-\alpha p}\hat{\mathbb{E}}|X|^p\to0, \quad n\to\infty.$$

    Case A2: $p\ge1$.

    By (4.5), $\hat{\mathbb{E}}X_{ni}=0$ and $\alpha p>1$, one can get that

    $$n^{-\alpha}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}X'_{ni}\Big|\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|X_{ni}-X'_{ni}|=n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|X''_{ni}|\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}[(|X_{ni}|-n^\alpha)I(|X_{ni}|>n^\alpha)]$$
    $$\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}\Big[|X_{ni}|\Big(1-g\Big(\frac{|X_{ni}|}{n^\alpha}\Big)\Big)\Big]\le Cn^{1-\alpha}\hat{\mathbb{E}}\Big[|X|\Big(1-g\Big(\frac{|X|}{n^\alpha}\Big)\Big)\Big]\le Cn^{1-\alpha p}\hat{\mathbb{E}}|X|^p\to0, \quad n\to\infty.$$

    It follows that for all $n$ large enough,

    $$n^{-\alpha}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}X'_{ni}\Big|<\frac{\varepsilon}{2},$$

    which implies that

    $$H_2\le C\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big\{\sum_{i=1}^na_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni})>\frac{\varepsilon n^\alpha}{2}\Big\}=:H_3.$$

    By Definition 2.6, $\{a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni}), n\ge1, 1\le i\le n\}$ are still arrays of rowwise m-END random variables, and $\hat{\mathbb{E}}(a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni}))=0$. In order to prove $H_2<\infty$, we need to show $H_3<\infty$.

    Case B1: $p<2$.

    By the $c_r$ inequality, Jensen's inequality and (2.5) in Lemma 2.4, combined with (4.5) and (4.7), we get

    $$H_3\le C\sum_{n=1}^\infty n^{\alpha p-2}\cdot4(1+Ke)m^2\frac{\sum_{i=1}^n\hat{\mathbb{E}}(a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni}))^2}{(\varepsilon n^\alpha)^2}\le C\sum_{n=1}^\infty n^{\alpha p-2-2\alpha}\sum_{i=1}^n\hat{\mathbb{E}}(a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni}))^2$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-2-2\alpha}\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(X'_{ni})^2\le C\sum_{n=1}^\infty n^{\alpha p-1-2\alpha}\Big[\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+n^{2\alpha}\mathbb{V}(|X|>\mu n^\alpha)\Big]$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-1-2\alpha}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+C\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>\mu n^\alpha)=:H_{31}+H_{32}.$$

    By (2.1) in Lemma 2.2, $H_{32}<\infty$. Next we prove $H_{31}<\infty$.

    For $0<\mu<1$, let $g_k(x)\in C_{l,Lip}(\mathbb{R})$, $k\ge1$, be such that $0\le g_k(x)\le1$ for all $x\in\mathbb{R}$, $g_k(\frac{x}{2^{k\alpha}})=1$ if $2^{(k-1)\alpha}<|x|\le2^{k\alpha}$, and $g_k(\frac{x}{2^{k\alpha}})=0$ if $|x|\le\mu2^{(k-1)\alpha}$ or $|x|>(1+\mu)2^{k\alpha}$. Then

    $$g_k\Big(\frac{|X|}{2^{k\alpha}}\Big)\le I(\mu2^{(k-1)\alpha}<|X|\le(1+\mu)2^{k\alpha}), \quad |X|^lg\Big(\frac{\mu|X|}{2^{j\alpha}}\Big)\le1+\sum_{k=1}^j|X|^lg_k\Big(\frac{|X|}{2^{k\alpha}}\Big), \quad \forall l>0. \quad (4.8)$$

    By (4.8), and since $g(x)$ is decreasing for $x\ge0$,

    $$H_{31}\le C\sum_{j=1}^\infty\sum_{n=2^j}^{2^{j+1}-1}n^{\alpha p-2\alpha-1}\hat{\mathbb{E}}\Big(X^2g\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)\le C\sum_{j=1}^\infty2^{(\alpha p-2\alpha-1)j}2^j\hat{\mathbb{E}}\Big(X^2g\Big(\frac{\mu|X|}{2^{\alpha(j+1)}}\Big)\Big)$$
    $$\le C\sum_{j=1}^\infty2^{\alpha(p-2)j}\hat{\mathbb{E}}\Big(1+\sum_{k=1}^jX^2g_k\Big(\frac{\mu|X|}{2^{(k+1)\alpha}}\Big)\Big)\le C\sum_{j=1}^\infty2^{\alpha(p-2)j}+C\sum_{j=1}^\infty2^{\alpha(p-2)j}\sum_{k=1}^j\hat{\mathbb{E}}\Big(X^2g_k\Big(\frac{\mu|X|}{2^{\alpha(k+1)}}\Big)\Big)=:H_{311}+H_{312}. \quad (4.9)$$

    Since $p<2$, we obtain $H_{311}<\infty$. For $H_{312}$, by (4.8) and (2.2) in Lemma 2.2, we get

    $$H_{312}\le C\sum_{k=1}^\infty\sum_{j=k}^\infty2^{\alpha(p-2)j}\hat{\mathbb{E}}\Big(X^2g_k\Big(\frac{\mu|X|}{2^{\alpha(k+1)}}\Big)\Big)\le C\sum_{k=1}^\infty2^{\alpha pk}\mathbb{V}(|X|>2^{\alpha k})<\infty. \quad (4.10)$$

    Case B2: $p\ge2$.

    Since $q>p\ge2$ and $n\ge m$, taking $\delta=1$ in (2.6) of Lemma 2.4, we have

    $$H_3\le\sum_{n=1}^\infty n^{\alpha p-2}C_q\delta^{-2q}m^qK\frac{\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni})|^q}{(\varepsilon n^\alpha)^q}+\sum_{n=1}^\infty n^{\alpha p-2}mK\exp\Big\{-\frac{(\varepsilon n^\alpha)^2}{8m^2\sum_{i=1}^n\hat{\mathbb{E}}(a_{ni}(X'_{ni}-\hat{\mathbb{E}}X'_{ni}))^2(1+\delta)}\Big\}$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^na_{ni}^q\hat{\mathbb{E}}|X'_{ni}-\hat{\mathbb{E}}X'_{ni}|^q+C\sum_{n=1}^\infty n^{\alpha p-2}\exp\Big\{-\frac{(\varepsilon n^\alpha)^2}{16n^2\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(X'_{ni}-\hat{\mathbb{E}}X'_{ni})^2}\Big\}=:I_1+I_2,$$

    where in the exponential term we used $\delta=1$ and $m\le n$.

    Next we establish $I_1<\infty$ and $I_2<\infty$. For $I_1$, by $\hat{\mathbb{E}}|X|^p<\infty$, the $c_r$ inequality, Jensen's inequality and (4.7), we have

    $$I_1\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^na_{ni}^q\hat{\mathbb{E}}|X'_{ni}|^q\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^na_{ni}^q\Big(\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+n^{\alpha q}\mathbb{V}(|X|>\mu n^\alpha)\Big)$$
    $$\le C\sum_{i=1}^\infty\sum_{2^{i-1}\le n<2^i}n^{\alpha p-\alpha q-1}\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{n^\alpha}\Big)\Big)+C\sum_{n=1}^\infty n^{\alpha p-1}\mathbb{V}(|X|>\mu n^\alpha)=:I_{11}+I_{12}. \quad (4.11)$$

    By (2.1), it is obvious that $I_{12}<\infty$, so we only need to prove $I_{11}<\infty$. By (2.2) and (4.8), it is easy to see that

    $$I_{11}\le C\sum_{i=1}^\infty2^{i(\alpha p-\alpha q)}\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{2^{i\alpha}}\Big)\Big)\le C\sum_{i=1}^\infty2^{i(\alpha p-\alpha q)}+C\sum_{i=1}^\infty2^{i(\alpha p-\alpha q)}\sum_{k=1}^i\hat{\mathbb{E}}\Big(|X|^qg_k\Big(\frac{\mu|X|}{2^{k\alpha}}\Big)\Big)$$
    $$\le C+C\sum_{k=1}^\infty\sum_{i=k}^\infty2^{i(\alpha p-\alpha q)}\hat{\mathbb{E}}\Big(|X|^qg_k\Big(\frac{\mu|X|}{2^{k\alpha}}\Big)\Big)\le C+C\sum_{k=1}^\infty2^{k\alpha p}\mathbb{V}(|X|>c2^{k\alpha})<\infty. \quad (4.12)$$

    Since $\alpha>3/2$, we have $2\alpha-3>0$, which implies that for all $n$ large enough,

    $$\frac{\varepsilon^2}{16}n^{2\alpha-3}\ge\alpha p\ln n.$$

    Noting that $\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(X'_{ni}-\hat{\mathbb{E}}X'_{ni})^2\le Cn$ by (4.5), (4.7) and (3.2), we obtain

    $$I_2\le C\sum_{n=1}^\infty n^{\alpha p-2}\exp\Big\{-\frac{\varepsilon^2}{16}n^{2\alpha-3}\Big\}\le C\sum_{n=1}^\infty n^{\alpha p-2}\exp\{-\alpha p\ln n\}\le C\sum_{n=1}^\infty n^{-2}<\infty.$$

    Hence $H_3<\infty$, and consequently $H_2<\infty$. This finishes the proof of Theorem 3.1.

    Proof of Theorem 3.2. Without loss of generality, assume $a_{ni}\ge0$ for all $n\ge1$ and $1\le i\le n$. For any $\varepsilon>0$,

    $$\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|-\varepsilon n^\alpha\Big\}_+^r=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_0^\infty\mathbb{V}\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|-\varepsilon n^\alpha>x^{1/r}\Big)dx$$
    $$=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_0^{n^{\alpha r}}\mathbb{V}\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|-\varepsilon n^\alpha>x^{1/r}\Big)dx+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|-\varepsilon n^\alpha>x^{1/r}\Big)dx$$
    $$\le\sum_{n=1}^\infty n^{\alpha p-2}\mathbb{V}\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>\varepsilon n^\alpha\Big)+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>x^{1/r}\Big)dx,$$

    where the first term is finite by Theorem 3.1. For the second term, arguing as in (4.1), it suffices to show that

    $$J:=\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\sum_{i=1}^na_{ni}X_{ni}>x^{1/r}\Big)dx<\infty.$$

    For all $n\ge1$ and $1\le i\le n$, denote

    $$Y'_{ni}=-x^{1/r}I(X_{ni}<-x^{1/r})+X_{ni}I(|X_{ni}|\le x^{1/r})+x^{1/r}I(X_{ni}>x^{1/r}),$$
    $$Y''_{ni}=X_{ni}-Y'_{ni}=(X_{ni}+x^{1/r})I(X_{ni}<-x^{1/r})+(X_{ni}-x^{1/r})I(X_{ni}>x^{1/r}).$$

    Then

    $$J\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\mathbb{V}(|X_{ni}|>x^{1/r})dx+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\sum_{i=1}^na_{ni}Y'_{ni}>x^{1/r}\Big)dx$$
    $$\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\mathbb{V}(|X_{ni}|>x^{1/r})dx+\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\sum_{i=1}^na_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni})>x^{1/r}-\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}Y'_{ni}\Big|\Big)dx=:J_1+J_2.$$

    In order to establish $J<\infty$, we only need to show $J_1<\infty$ and $J_2<\infty$. By (4.5), (2.1) in Lemma 2.2, and the fact that $g(x)$ is decreasing for $x\ge0$, we get

    $$J_1\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\sum_{i=1}^n\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_{ni}|}{x^{1/r}}\Big)\Big)dx=\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{x^{1/r}}\Big)\Big)dx$$
    $$=\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty\int_{m^{\alpha r}}^{(m+1)^{\alpha r}}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{x^{1/r}}\Big)\Big)dx\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{m=n}^\infty[(m+1)^{\alpha r}-m^{\alpha r}]\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{m^\alpha}\Big)\Big)$$
    $$\le C\sum_{m=1}^\infty m^{\alpha r-1}\mathbb{V}(|X|>\mu m^\alpha)\sum_{n=1}^mn^{\alpha p-\alpha r-1}\le C\sum_{m=1}^\infty m^{\alpha p-1}\mathbb{V}(|X|>\mu m^\alpha)<\infty.$$

    Next we prove $J_2<\infty$. By (4.5) and the $c_r$ inequality, for all $\gamma>0$,

    $$\hat{\mathbb{E}}|Y'_{ni}|^\gamma\le\hat{\mathbb{E}}\Big(|X|^\gamma g\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)+x^{\gamma/r}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{x^{1/r}}\Big)\Big)\le\hat{\mathbb{E}}\Big(|X|^\gamma g\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)+x^{\gamma/r}\mathbb{V}(|X|>\mu x^{1/r}). \quad (4.13)$$

    Case C1: $p\ge1$.

    By (4.5), $\hat{\mathbb{E}}X_{ni}=0$ and $\alpha p>1$, it is easy to see that

    $$\sup_{x\ge n^{\alpha r}}x^{-1/r}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}Y'_{ni}\Big|\le\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|X_{ni}-Y'_{ni}|=\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|Y''_{ni}|$$
    $$=\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}[(|X_{ni}|-x^{1/r})I(|X_{ni}|>x^{1/r})]\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}[|X_{ni}|I(|X_{ni}|>n^\alpha)]$$
    $$\le n^{-\alpha}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}\Big[|X_{ni}|\Big(1-g\Big(\frac{|X_{ni}|}{n^\alpha}\Big)\Big)\Big]\le Cn^{1-\alpha}\hat{\mathbb{E}}\Big[|X|\Big(1-g\Big(\frac{|X|}{n^\alpha}\Big)\Big)\Big]\le Cn^{1-\alpha p}\hat{\mathbb{E}}|X|^p\to0, \quad n\to\infty.$$

    Case C2: $0<p<1$.

    By (4.5), (4.13), Markov's inequality and $\alpha p>1$, we obtain

    $$\sup_{x\ge n^{\alpha r}}x^{-1/r}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}Y'_{ni}\Big|\le\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}|Y'_{ni}|$$
    $$\le\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}\hat{\mathbb{E}}\Big(|X_{ni}|g\Big(\frac{\mu|X_{ni}|}{x^{1/r}}\Big)\Big)+\sup_{x\ge n^{\alpha r}}x^{-1/r}\sum_{i=1}^na_{ni}x^{1/r}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_{ni}|}{x^{1/r}}\Big)\Big)$$
    $$\le\sup_{x\ge n^{\alpha r}}x^{-1/r}n\hat{\mathbb{E}}\Big[|X|I\Big(|X|\le\frac{1}{\mu}x^{1/r}\Big)\Big]+\sup_{x\ge n^{\alpha r}}n\mathbb{V}(|X|>\mu x^{1/r})\le Cn^{1-\alpha p}\hat{\mathbb{E}}|X|^p+n\mathbb{V}(|X|>\mu n^\alpha)\le Cn^{1-\alpha p}\hat{\mathbb{E}}|X|^p\to0, \quad n\to\infty.$$

    Hence, it follows that for all $n$ large enough,

    $$\sup_{x\ge n^{\alpha r}}x^{-1/r}\Big|\sum_{i=1}^na_{ni}\hat{\mathbb{E}}Y'_{ni}\Big|<\frac{1}{2},$$

    which implies that

    $$J_2\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\sum_{i=1}^na_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni})>\frac{x^{1/r}}{2}\Big)dx=:J_3.$$

    By Definition 2.6, $\{a_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni}), n\ge1, 1\le i\le n\}$ are still arrays of rowwise m-END random variables, and $\hat{\mathbb{E}}(a_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni}))=0$. In order to prove $J_2<\infty$, we have to show $J_3<\infty$.

    Case D1: $p<2$.

    By the $c_r$ inequality, Jensen's inequality and (2.5) in Lemma 2.4, combined with (4.5) and (4.13),

    $$J_3\le\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\cdot4(1+Ke)m^2\int_{n^{\alpha r}}^\infty\frac{\sum_{i=1}^n\hat{\mathbb{E}}(a_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni}))^2}{x^{2/r}}dx\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty x^{-2/r}\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(Y'_{ni})^2dx$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty x^{-2/r}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)dx+C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty\mathbb{V}(|X|>\mu x^{1/r})dx,$$

    where the second term is finite by the same estimate as for $J_1$. For the first term,

    $$C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{k=n}^\infty\int_{k^{\alpha r}}^{(k+1)^{\alpha r}}x^{-2/r}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)dx\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{k=n}^\infty k^{\alpha r-1-2\alpha}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{k^\alpha}\Big)\Big)$$
    $$\le C\sum_{k=1}^\infty k^{\alpha r-1-2\alpha}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{k^\alpha}\Big)\Big)\sum_{n=1}^kn^{\alpha p-\alpha r-1}\le C\sum_{k=1}^\infty k^{\alpha p-1-2\alpha}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{k^\alpha}\Big)\Big)<\infty,$$

    where the last series is finite by the argument used for $H_{31}$ in (4.9) and (4.10). Hence $J_3<\infty$ in this case.

    Case D2: $p\ge2$.

    For $q>p\ge2$ and $n\ge m$, by (2.6) in Lemma 2.4 with $\delta=1$, together with the $c_r$ inequality and Jensen's inequality, we have

    $$J_3\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\frac{\sum_{i=1}^n\hat{\mathbb{E}}|a_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni})|^q}{x^{q/r}}dx+C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\exp\Big\{-\frac{x^{2/r}}{8m^2\sum_{i=1}^n\hat{\mathbb{E}}(a_{ni}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni}))^2(1+\delta)}\Big\}dx$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty x^{-q/r}\sum_{i=1}^na_{ni}^q\hat{\mathbb{E}}|Y'_{ni}|^qdx+C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\exp\Big\{-\frac{x^{2/r}}{16m^2\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni})^2}\Big\}dx=:J_{31}+J_{32}.$$

    Next we prove $J_{31}<\infty$ and $J_{32}<\infty$. By (4.13) with $\gamma=q$, and arguing as for $J_1$ and $I_{11}$ (see (4.11) and (4.12)),

    $$J_{31}\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty x^{-q/r}\Big(\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)+x^{q/r}\mathbb{V}(|X|>\mu x^{1/r})\Big)dx$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\sum_{k=n}^\infty\int_{k^{\alpha r}}^{(k+1)^{\alpha r}}x^{-q/r}\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{x^{1/r}}\Big)\Big)dx+C\sum_{n=1}^\infty n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^\infty\mathbb{V}(|X|>\mu x^{1/r})dx$$
    $$\le C\sum_{k=1}^\infty k^{\alpha r-1-\alpha q}\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{k^\alpha}\Big)\Big)\sum_{n=1}^kn^{\alpha p-\alpha r-1}+C\le C\sum_{k=1}^\infty k^{\alpha p-1-\alpha q}\hat{\mathbb{E}}\Big(|X|^qg\Big(\frac{\mu|X|}{k^\alpha}\Big)\Big)+C<\infty.$$

    Let $\beta>\max\{\frac{\alpha p-1}{2\alpha-3},\frac{r}{2}\}$, so that $\frac{2\beta}{r}>1$ and $2-\alpha p+(2\alpha-3)\beta>1$; then $e^s\ge Cs^\beta$ for all $s$ large enough. Noting that $\sum_{i=1}^na_{ni}^2\hat{\mathbb{E}}(Y'_{ni}-\hat{\mathbb{E}}Y'_{ni})^2\le Cn$ by (4.5), (4.13) and (3.2), and that $m\le n$, the substitution $x=n^{\alpha r}t$ gives

    $$J_{32}\le C\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\exp\Big\{-C\frac{x^{2/r}}{n^3}\Big\}dx\le C\sum_{n=1}^\infty n^{\alpha p-2}\int_1^\infty\exp\{-Cn^{2\alpha-3}t^{2/r}\}dt$$
    $$\le C\sum_{n=1}^\infty n^{\alpha p-2}\int_1^\infty(n^{2\alpha-3}t^{2/r})^{-\beta}dt\le C\sum_{n=1}^\infty n^{\alpha p-2-(2\alpha-3)\beta}\int_1^\infty t^{-2\beta/r}dt\le C\sum_{n=1}^\infty\frac{1}{n^{2-\alpha p+(2\alpha-3)\beta}}<\infty.$$

    Finally, since $\hat{\mathbb{E}}(-X_{ni})=\hat{\varepsilon}(-X_{ni})=0$, the array $\{-X_{ni}, n\ge1, i\ge1\}$ also satisfies the conditions of Theorem 3.2, so we likewise obtain

    $$\sum_{n=1}^\infty n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^\infty\mathbb{V}\Big(\sum_{i=1}^na_{ni}X_{ni}<-x^{1/r}\Big)dx<\infty.$$

    Hence, the proof of Theorem 3.2 is finished.

    Proof of Theorem 3.3. Taking $\alpha p=2$ in Theorem 3.1, we get, for any $\varepsilon>0$,

    $$\sum_{n=1}^\infty\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>\varepsilon n^\alpha\Big\}<\infty.$$

    By Lemma 2.5,

    $$\mathbb{V}\Big\{\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|>\varepsilon n^\alpha,\ i.o.\Big\}=0,$$

    and therefore

    $$\mathcal{V}\Big\{\bigcup_{m=1}^\infty\bigcap_{n=m}^\infty\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|\le\varepsilon n^\alpha\Big)\Big\}=1.$$

    Since $\varepsilon>0$ is arbitrary, intersecting these events over $\varepsilon=1/k$, $k\ge1$, and using the countable sub-additivity of $\mathbb{V}$, we obtain

    $$\bigcap_{k=1}^\infty\bigcup_{m=1}^\infty\bigcap_{n=m}^\infty\Big(\Big|\sum_{i=1}^na_{ni}X_{ni}\Big|\le\frac{n^\alpha}{k}\Big)\subset\Big(n^{-\alpha}\sum_{i=1}^na_{ni}X_{ni}\to0\Big),$$

    where the event on the left has $\mathcal{V}$-capacity 1. Since $\alpha=2/p$, this yields

    $$\mathcal{V}\Big\{n^{-2/p}\sum_{i=1}^na_{ni}X_{ni}\to0\Big\}=1,$$

    which is (3.5). This completes the proof of Theorem 3.3.

    This paper was supported by the Department of Science and Technology of Jilin Province (Grant No. YDZJ202101ZYTS156), and Graduate Innovation Project of Beihua University (2021003).

    All authors declare no conflict of interest in this paper.



    [1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, Stoch. Anal. Appl., 2 (2006), 541–567. http://dx.doi.org/10.1007/978-3-540-70847-6_25 doi: 10.1007/978-3-540-70847-6_25
    [2] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223–2253. http://dx.doi.org/10.1016/j.spa.2007.10.015 doi: 10.1016/j.spa.2007.10.015
    [3] S. G. Peng, A new central limit theorem under sublinear expectations, arXiv: 0803.2656, 2008.
    [4] P. Y. Chen, S. X. Gan, Limiting behavior of weighted sums of i.i.d. random variables, Statist. Probab. Lett., 77 (2007), 1589–1599. http://dx.doi.org/10.1016/j.spl.2007.03.038 doi: 10.1016/j.spl.2007.03.038
    [5] Z. C. Hu, L. Zhou, Multi-dimensional central limit theorems and laws of large numbers under sublinear expectations, Acta Math. Sci. Ser. B (Engl. Ed.), 31 (2015), 305–318. http://dx.doi.org/10.1007/s10114-015-3212-1 doi: 10.1007/s10114-015-3212-1
    [6] L. X. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci. Ser. B (Engl. Ed.), 42 (2022), 467–490. http://dx.doi.org/10.1007/s10473-022-0203-z doi: 10.1007/s10473-022-0203-z
    [7] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China-Math., 59 (2016), 2503–2526. http://dx.doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [8] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China-Math., 59 (2016), 751–768. http://dx.doi.org/10.1007/S11425-015-5105-2 doi: 10.1007/S11425-015-5105-2
    [9] L. X. Zhang, J. H. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Statist. Probab. Lett., 137 (2018), 269–276. http://dx.doi.org/10.48550/arXiv.1703.00604 doi: 10.48550/arXiv.1703.00604
    [10] Y. T. Lan, N. Zhang, Several moment inequalities under sublinear expectations, Acta Math. Appl. Sinica, 41 (2018), 229–248. http://dx.doi.org/10.12387/C2018018 doi: 10.12387/C2018018
    [11] S. Guo, Y. Zhang, Moderate deviation principle for m-dependent random variables under the sub-linear expectation, AIMS Math., 7 (2022), 5943–5956. http://dx.doi.org/10.3934/math.2022331 doi: 10.3934/math.2022331
    [12] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, P. Natl. A. Sci. USA, 33 (1947), 25–31. http://dx.doi.org/10.1073/pnas.33.2.25 doi: 10.1073/pnas.33.2.25
    [13] Y. S. Chow, On the rate of moment complete convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica, 16 (1988), 177–201.
    [14] Q. H. Yu, M. M. Ning, M. Pan, A. T. Shen, Complete convergence for weighted sums of arrays of rowwise m-END random variables, J. Hebei Norm. Univ. Nat. Sci. Ed., 40 (2018), 333–338. http://dx.doi.org/10.3969/j.issn.1000-2375.2018.04.003 doi: 10.3969/j.issn.1000-2375.2018.04.003
    [15] B. Meng, D. C. Wang, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Comm. Statist. Theory Methods, 51 (2022), 1–14. https://doi.org/10.1080/03610926.2020.1804587 doi: 10.1080/03610926.2020.1804587
    [16] Y. F. Wu, O. C. Munuel, V. Andrei, Complete convergence and complete moment convergence for arrays of rowwise END random variables, Glas. Mat. Ser. III, 51 (2022). http://dx.doi.org/10.3336/gm.49.2.16
    [17] Y. F. Wu, M. Guan, Convergence properties of the partial sums for sequences of END random Variables, J. Korean Math. Soc., 49 (2012), 1097–1110. http://dx.doi.org/10.4134/jkms.2012.49.6.1097 doi: 10.4134/jkms.2012.49.6.1097
    [18] X. J. Wang, X. Q. Li, S. H. Hu, X. H. Wang, On complete convergence for an extended negatively dependent sequence, Comm. Statist. Theory Methods, 43 (2014), 2923–2937. http://dx.doi.org/10.1080/03610926.2012.690489 doi: 10.1080/03610926.2012.690489
    [19] Y. Ding, Y. Wu, S. L. Ma, X. R. Tao, X. J. Wang, Complete convergence and complete moment convergence for widely orthant-dependent random variables, Comm. Statist. Theory Methods, 46 (2017), 8278–8294. http://dx.doi.org/10.1080/03610926.2016.1177085 doi: 10.1080/03610926.2016.1177085
    [20] F. X. Feng, D. C. Wang, Q. Y. Wu, H. W. Huang, Complete and complete moment convergence for weighted sums of arrays of rowwise negatively dependent random variables under the sub-linear expectations, Comm. Statist. Theory Methods, 50 (2021), 594–608. https://doi.org/10.1080/03610926.2019.1639747
    [21] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 1–14. http://dx.doi.org/10.1186/s13660-017-1538-1 doi: 10.1186/s13660-017-1538-1
    [22] C. C. Jia, Q. Y. Wu, Complete convergence and complete integral convergence for weighted sums of widely acceptable random variables under the sub-linear expectations, AIMS Math., 7 (2022), 8430–8448. http://dx.doi.org/10.3934/math.2022470 doi: 10.3934/math.2022470
    [23] D. W. Lu, Y. Meng, Complete and complete integral convergence for arrays of row wise widely negative dependent random variables under the sub-linear expectations, Comm. Statist. Theory Methods, 51 (2020), 1–14. http://dx.doi.org/10.1080/03610926.2020.1786585 doi: 10.1080/03610926.2020.1786585
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
