Research article

Complete integral convergence for weighted sums of negatively dependent random variables under sub-linear expectations

  • Received: 07 May 2023 Revised: 02 July 2023 Accepted: 10 July 2023 Published: 13 July 2023
  • MSC : 60F15

• In this paper, complete convergence and complete integral convergence for weighted sums of negatively dependent random variables under sub-linear expectations are established. The results extend some complete moment convergence theorems from the classical probability space to the setting of sub-linear expectation spaces.

    Citation: Lunyi Liu, Qunying Wu. Complete integral convergence for weighted sums of negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2023, 8(9): 22319-22337. doi: 10.3934/math.20231138




Probability limit theory is an important research topic in mathematical statistics and has found extensive application in mathematics, statistics and finance. However, the limitations of classical limit theory have become increasingly apparent as limit theory is applied to finance, risk measurement and other areas in which the underlying mathematical model is itself subject to uncertainty, and sub-linear expectations provide a natural framework for analysis and computation in such situations. To address this issue, academician Peng [1,2,3] put forward the concept of the sub-linear expectation space, constructed a complete theoretical system for it and effectively overcame the limitations of traditional probability space theory in statistics, economics and other fields. In recent years, an increasing number of scholars have conducted extensive research in this field, yielding numerous relevant findings. Notably, Peng [1,2,3] and Zhang [4,5,6] have derived a series of significant conclusions, including the strong law of large numbers, exponential inequalities and Rosenthal's inequality under sub-linear expectations. These findings have established a solid groundwork for investigating the limit theory of sub-linear expectation spaces, and the results obtained by Peng and Zhang have greatly advanced our understanding of sub-linear expectation theory.

The concepts of complete convergence and complete moment convergence hold significant importance in probability limit theory. Complete convergence was initially introduced by Hsu and Robbins [7], and Chow [8] introduced complete moment convergence for independent random variables, which has since been expanded upon. Compared with complete convergence, complete moment convergence is a more precise notion, which has prompted further investigation. Qiu and Chen [9] established complete moment convergence for independent and identically distributed random variables, while Yang and Hu [10] demonstrated complete moment convergence for pairwise NQD random variables. Song and Zhu [11] derived a complete moment convergence theorem for extended negatively dependent random variables. Notably, in the sub-linear expectation space, complete moment convergence is equivalent to complete integral convergence. In recent years, an increasing number of scholars have studied complete convergence and complete integral convergence under sub-linear expectations, thereby significantly enriching the associated theory. For example, Li and Wu [12] studied complete integral convergence for arrays of row-wise extended negatively dependent random variables; Lu and Meng [13] examined complete convergence and complete integral convergence for arrays of row-wise widely negative dependent random variables; and Chen and Wu [14] investigated complete convergence and complete integral convergence of partial sums for moving average processes. It is noteworthy that complete convergence and complete integral convergence with maxima under sub-linear expectations are only available when the sequences are independent or negatively dependent. For example, Feng and Zeng [15] proved a complete convergence theorem for the maximum of partial sums under sub-linear expectations, and Xu and Kong [16,17] discussed complete convergence and complete integral convergence for negatively dependent sequences. These findings suggest that the theory of complete integral convergence deserves further development. The objective of this paper is to extend the complete moment convergence result established by Wu and Wang [18] to the sub-linear expectation space and to derive the corresponding conclusions.

The present article is structured as follows: Section 2 introduces basic notation, concepts and related properties of sub-linear expectations, along with several lemmas. Section 3 states the complete convergence and complete integral convergence results for weighted sums of negatively dependent random variables under sub-linear expectations. Finally, Section 4 uses the aforementioned lemmas to prove the main results. Throughout, $c$ denotes a positive constant, independent of $n$, whose value may change from line to line, $\ln x$ denotes $\log_2 x$, and $I(\cdot)$ denotes an indicator function.

We use the framework and notation of Peng [1,2,3] and Zhang [6]. Let $(\Omega,\mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega,\mathcal{F})$ such that if $X_1,X_2,\ldots,X_n\in\mathcal{H}$, then $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying

$$|\varphi(x)-\varphi(y)|\le c(1+|x|^m+|y|^m)|x-y|,\quad \forall x,y\in\mathbb{R}^n,$$

for some $c>0$ and $m\in\mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables; in this case we write $X\in\mathcal{H}$.

Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}:\mathcal{H}\to[-\infty,+\infty]$ satisfying the following properties: for all $X,Y\in\mathcal{H}$, we have

(a) Monotonicity: if $X\ge Y$, then $\hat{\mathbb{E}}(X)\ge\hat{\mathbb{E}}(Y)$;

(b) Constant preserving: $\hat{\mathbb{E}}(c)=c$;

(c) Sub-additivity: $\hat{\mathbb{E}}(X+Y)\le\hat{\mathbb{E}}(X)+\hat{\mathbb{E}}(Y)$;

(d) Positive homogeneity: $\hat{\mathbb{E}}(\lambda X)=\lambda\hat{\mathbb{E}}(X)$ for all $\lambda\ge0$.

The triple $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sub-linear expectation space.

Given a sub-linear expectation $\hat{\mathbb{E}}$, we denote the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ by

$$\hat{\varepsilon}(X):=-\hat{\mathbb{E}}(-X),\quad \forall X\in\mathcal{H}.$$

From the definition, it is easily shown that for all $X,Y\in\mathcal{H}$,

$$\hat{\varepsilon}(X)\le\hat{\mathbb{E}}(X),\quad\hat{\mathbb{E}}(X-Y)\ge\hat{\mathbb{E}}(X)-\hat{\mathbb{E}}(Y),\quad\hat{\mathbb{E}}(X+c)=\hat{\mathbb{E}}(X)+c,\tag{2.1}$$
$$|\hat{\mathbb{E}}(X-Y)|\le\hat{\mathbb{E}}|X-Y|.\tag{2.2}$$
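To make the abstract definitions concrete, here is a minimal numerical sketch (not from the paper; all names and values are illustrative): a sub-linear expectation can be realized as the upper expectation $\hat{\mathbb{E}}(X)=\max_{\theta}E_{\theta}[X]$ over a finite family of linear expectations, and the properties above together with (2.1) and (2.2) can then be checked directly.

```python
# Minimal sketch (assumption: a finite sample space and a finite family of laws;
# not part of the paper): E_hat(X) = max_theta E_theta[X] is a sub-linear expectation.
import numpy as np

rng = np.random.default_rng(0)
thetas = [rng.dirichlet(np.ones(6)) for _ in range(4)]  # family of probability vectors on {0,...,5}

def E_hat(x):                 # sub-linear (upper) expectation
    return max(float(p @ x) for p in thetas)

def eps_hat(x):               # conjugate expectation, eps_hat(X) = -E_hat(-X)
    return -E_hat(-x)

X, Y = rng.normal(size=6), rng.normal(size=6)

assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12        # (c) sub-additivity
assert abs(E_hat(2.5 * X) - 2.5 * E_hat(X)) < 1e-12       # (d) positive homogeneity
assert abs(E_hat(X + 3.0) - (E_hat(X) + 3.0)) < 1e-12     # constant shift, cf. (2.1)
assert eps_hat(X) <= E_hat(X) + 1e-12                     # eps_hat <= E_hat, cf. (2.1)
assert abs(E_hat(X - Y)) <= E_hat(np.abs(X - Y)) + 1e-12  # cf. (2.2)
print("all checks passed for this finite family")
```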

Definition 2.2. Let $\mathcal{G}\subset\mathcal{F}$. A function $V:\mathcal{G}\to[0,1]$ is called a capacity if

$$V(\emptyset)=0,\quad V(\Omega)=1\quad\text{and}\quad V(A)\le V(B)\ \text{for}\ A\subset B,\ A,B\in\mathcal{G}.$$

It is called sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{G}$. In the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, we define the pair $(\mathbb{V},\mathcal{V})$ of capacities by

$$\mathbb{V}(A):=\inf\{\hat{\mathbb{E}}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\},\quad\mathcal{V}(A):=1-\mathbb{V}(A^c),\quad\forall A\in\mathcal{F},$$

where $A^c$ is the complement of $A$. It is obvious that $\mathbb{V}$ is sub-additive, and

$$\hat{\mathbb{E}}f\le\mathbb{V}(A)\le\hat{\mathbb{E}}g,\quad\hat{\varepsilon}f\le\mathcal{V}(A)\le\hat{\varepsilon}g,\quad\text{if}\ f\le I(A)\le g,\ f,g\in\mathcal{H}.$$

This implies the Markov inequality: for $X\in\mathcal{H}$,

$$\mathbb{V}(|X|\ge x)\le\hat{\mathbb{E}}(|X|^p)/x^p,\quad\forall x>0,\ p>0,$$

from $I(|X|\ge x)\le|X|^p/x^p\in\mathcal{H}$. By Lemma 4.1 in Zhang [5], we have the Hölder inequality: for $X,Y\in\mathcal{H}$ and $p,q>1$ satisfying $p^{-1}+q^{-1}=1$,

$$\hat{\mathbb{E}}(|XY|)\le\big(\hat{\mathbb{E}}(|X|^p)\big)^{1/p}\big(\hat{\mathbb{E}}(|Y|^q)\big)^{1/q},$$

and, in particular, the Jensen inequality: for $X\in\mathcal{H}$,

$$\big(\hat{\mathbb{E}}(|X|^r)\big)^{1/r}\le\big(\hat{\mathbb{E}}(|X|^s)\big)^{1/s}\quad\text{for}\ 0<r\le s.$$
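As a small numerical sanity check (again illustrative, not part of the paper), the Markov, Hölder and Jensen inequalities above can be verified for the same kind of finite-family upper expectation; the family, the level and the exponents below are arbitrary choices.

```python
# Illustrative check of the Markov, Hoelder and Jensen inequalities for an upper
# expectation over a finite family of laws (assumed setup, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
thetas = [rng.dirichlet(np.ones(6)) for _ in range(4)]   # family of probability vectors

def E_hat(x):                                            # upper expectation
    return max(float(p @ x) for p in thetas)

def V(event):                                            # upper capacity of a 0/1 event
    return max(float(p @ event) for p in thetas)

X, Y = rng.normal(size=6), rng.normal(size=6)
absX, absY = np.abs(X), np.abs(Y)
x0, p0, q0 = 0.8, 2.0, 2.0                               # level and exponents, 1/p0 + 1/q0 = 1

# Markov: V(|X| >= x) <= E_hat(|X|^p) / x^p
assert V((absX >= x0).astype(float)) <= E_hat(absX ** p0) / x0 ** p0 + 1e-12
# Hoelder: E_hat|XY| <= (E_hat|X|^p)^(1/p) * (E_hat|Y|^q)^(1/q)
assert E_hat(np.abs(X * Y)) <= E_hat(absX ** p0) ** (1 / p0) * E_hat(absY ** q0) ** (1 / q0) + 1e-12
# Jensen: (E_hat|X|^r)^(1/r) <= (E_hat|X|^s)^(1/s) for 0 < r <= s
r, s = 1.0, 3.0
assert E_hat(absX ** r) ** (1 / r) <= E_hat(absX ** s) ** (1 / s) + 1e-12
print("Markov, Hoelder and Jensen checks passed")
```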

Definition 2.3. We define the Choquet integrals/expectations $(C_{\mathbb{V}},C_{\mathcal{V}})$ by

$$C_V(X)=\int_0^{\infty}V(X\ge t)\,dt+\int_{-\infty}^{0}\big[V(X\ge t)-1\big]\,dt,$$

with $V$ being replaced by $\mathbb{V}$ and $\mathcal{V}$, respectively.
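Definition 2.3 can also be illustrated numerically: for an upper capacity on a finite sample space, the Choquet integral is obtained by integrating the tail capacity $t\mapsto V(X\ge t)$. The grid-based sketch below (the family, the grid and all names are our illustrative assumptions) also exhibits $\hat{\mathbb{E}}(X)\le C_{\mathbb{V}}(X)$, in the spirit of Lemma 2.3 below.

```python
# Illustrative sketch (not from the paper): Choquet integral of Definition 2.3 for
# the upper capacity V(A) = max_theta P_theta(A), computed on a grid.
import numpy as np

rng = np.random.default_rng(1)
thetas = [rng.dirichlet(np.ones(6)) for _ in range(4)]
X = rng.normal(size=6)

def V(event):                                  # upper capacity of a 0/1 event
    return max(float(p @ event) for p in thetas)

def choquet(x, lo=-10.0, hi=10.0, steps=20001):
    ts = np.linspace(lo, hi, steps)
    dt = ts[1] - ts[0]
    tail = np.array([V((x >= t).astype(float)) for t in ts])
    pos = tail[ts >= 0].sum() * dt             # approximates the integral of V(X >= t) over [0, inf)
    neg = (tail[ts < 0] - 1.0).sum() * dt      # approximates the integral of V(X >= t) - 1 over (-inf, 0)
    return pos + neg

E_hat = max(float(p @ X) for p in thetas)      # upper expectation of X
print(f"C_V(X) ~ {choquet(X):.4f} >= E_hat(X) = {E_hat:.4f}")
```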

Definition 2.4. (Identical distribution) Let $\mathbf{X}_1$ and $\mathbf{X}_2$ be two $n$-dimensional random vectors defined, respectively, in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed if

$$\hat{\mathbb{E}}_1(\varphi(\mathbf{X}_1))=\hat{\mathbb{E}}_2(\varphi(\mathbf{X}_2)),\quad\forall\varphi\in C_{l,Lip}(\mathbb{R}^n),$$

whenever the sub-expectations are finite. A sequence $\{X_n,n\ge1\}$ of random variables is said to be identically distributed if, for each $i\ge1$, $X_i$ and $X_1$ are identically distributed.

Definition 2.5. (Negative dependence) In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, a random vector $\mathbf{Y}=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal{H}$, is said to be negatively dependent (ND) to another random vector $\mathbf{X}=(X_1,\ldots,X_m)$, $X_i\in\mathcal{H}$, under $\hat{\mathbb{E}}$ if for each pair of test functions $\varphi_1\in C_{l,Lip}(\mathbb{R}^m)$ and $\varphi_2\in C_{l,Lip}(\mathbb{R}^n)$ we have

$$\hat{\mathbb{E}}[\varphi_1(\mathbf{X})\varphi_2(\mathbf{Y})]\le\hat{\mathbb{E}}[\varphi_1(\mathbf{X})]\hat{\mathbb{E}}[\varphi_2(\mathbf{Y})],$$

whenever $\varphi_1(\mathbf{X})\ge0$, $\hat{\mathbb{E}}[\varphi_2(\mathbf{Y})]\ge0$, $\hat{\mathbb{E}}[|\varphi_1(\mathbf{X})\varphi_2(\mathbf{Y})|]<\infty$, $\hat{\mathbb{E}}[|\varphi_1(\mathbf{X})|]<\infty$, $\hat{\mathbb{E}}[|\varphi_2(\mathbf{Y})|]<\infty$, and either $\varphi_1$ and $\varphi_2$ are coordinatewise non-increasing or $\varphi_1$ and $\varphi_2$ are coordinatewise non-decreasing.

A sequence of random variables $\{X_n,n\ge1\}$ is said to be negatively dependent if $X_{i+1}$ is negatively dependent to $(X_1,\ldots,X_i)$ for each $i\ge1$.

It is obvious that if $\{X_n,n\ge1\}$ is a sequence of negatively dependent random variables and the functions $f_1(x),f_2(x),\ldots\in C_{l,Lip}(\mathbb{R})$ are all non-decreasing (resp. all non-increasing), then $\{f_n(X_n),n\ge1\}$ is also a sequence of negatively dependent random variables.

Definition 2.6. A sub-linear expectation $\hat{\mathbb{E}}:\mathcal{H}\to\mathbb{R}$ is said to be countably sub-additive if

$$\hat{\mathbb{E}}(X)\le\sum_{n=1}^{\infty}\hat{\mathbb{E}}(X_n),\quad\text{whenever}\ X\le\sum_{n=1}^{\infty}X_n,\ X,X_n\in\mathcal{H},\ X\ge0,\ X_n\ge0,\ n\ge1.$$

    We need the following lemmas to prove the main results.

Lemma 2.1. (Zhang [5]) Suppose that $X_k$ is negatively dependent to $(X_{k+1},\ldots,X_n)$ for each $k=1,\ldots,n-1$ and $\hat{\mathbb{E}}(X_k)\le0$. Then for $q\ge2$,

$$\hat{\mathbb{E}}\Big[\max_{k\le n}|S_k|^q\Big]\le c_q\Big\{\sum_{k=1}^{n}\hat{\mathbb{E}}[|X_k|^q]+\Big(\sum_{k=1}^{n}\hat{\mathbb{E}}[|X_k|^2]\Big)^{q/2}+\Big(\sum_{k=1}^{n}\big[(\hat{\varepsilon}X_k)^{-}+(\hat{\mathbb{E}}X_k)^{+}\big]\Big)^{q}\Big\},\tag{2.3}$$

where $S_k=\sum_{i=1}^{k}X_i$ and $c_q$ is a positive constant depending only on $q$.

Lemma 2.2. Suppose $X\in\mathcal{H}$, $\gamma>0$, $0<\alpha\le2$ and $b_y=y^{1/\alpha}\ln^{1/\gamma}y$.

(i) Then, for any $c>0$,

$$C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty\iff\sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)<\infty.\tag{2.4}$$

(ii) If $C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty$, then for any $\beta>1$ and $c>0$,

$$\sum_{k=1}^{\infty}\beta^{2k/\alpha}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big)<\infty.\tag{2.5}$$

    Proof. (i) Because

$$C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty\iff\int_1^{\infty}\mathbb{V}\big(|X|^2\ln^{1-2/\gamma}|X|>x\big)\,dx<\infty.\tag{2.6}$$

Let $f(x)=x^2\ln^{1-2/\gamma}x$, $x>1$, and denote the inverse function of $f(x)$ by $f^{-1}(x)$. Then we can get

$$\int_1^{\infty}\mathbb{V}\big(|X|^2\ln^{1-2/\gamma}|X|>x\big)\,dx=\int_1^{\infty}\mathbb{V}\big(|X|>f^{-1}(x)\big)\,dx.\tag{2.7}$$

Let $f^{-1}(x)=cb_y=cy^{1/\alpha}\ln^{1/\gamma}y$; then, for any $c>0$, we have

$$x=f\big(cy^{1/\alpha}\ln^{1/\gamma}y\big)=cy^{2/\alpha}\ln^{2/\gamma}y\,\ln^{1-2/\gamma}\big(y^{1/\alpha}\ln^{1/\gamma}y\big).$$

    Let

$$h(y):=\ln^{2/\gamma}y\,\ln^{1-2/\gamma}\big(y^{1/\alpha}\ln^{1/\gamma}y\big)=c_2\exp\Big\{\int_{e}^{y}\frac{g(u)}{u}\,du\Big\},$$

where $c_2=\exp\{(2/\gamma-1)\ln\alpha\}$, $g(u)=\dfrac{2}{\gamma\ln u}+\Big(1-\dfrac{2}{\gamma}\Big)\dfrac{\frac{1}{\alpha}+\frac{1}{\gamma\ln u}}{\frac{1}{\alpha}\ln u+\frac{1}{\gamma}\ln\ln u}$, and obviously $g(u)\to0$ as $u\to\infty$.

Then, for any $c>0$, we can get

$$x'=\big(cy^{2/\alpha}h(y)\big)'=\frac{2c}{\alpha}y^{2/\alpha-1}h(y)+cy^{2/\alpha}h(y)\frac{g(y)}{y}\sim cy^{2/\alpha-1}\ln y.$$

Therefore, combining (2.7), for any $c>0$, we have

$$\int_1^{\infty}\mathbb{V}\big(|X|>f^{-1}(x)\big)\,dx=\int_1^{\infty}\mathbb{V}(|X|>cb_y)\,x'\,dy\asymp c\int_1^{\infty}\mathbb{V}(|X|>cb_y)\,y^{2/\alpha-1}\ln y\,dy.\tag{2.8}$$

Obviously, combining (2.6)–(2.8), we can get

$$C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty\iff\sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)<\infty;$$

    hence, the proof of (i) is established.

(ii) By the proof of (i), we can get (2.4); then, for any $\beta>1$ and $c>0$,

$$\infty>\sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)\ge c\sum_{k=1}^{\infty}\sum_{\beta^{k-1}\le n<\beta^{k}}\beta^{k(2/\alpha-1)}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big)=c\sum_{k=1}^{\infty}\beta^{2k/\alpha}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big);$$

    hence, the proof of (ii) is established.

Lemma 2.3. (Zhang [5]) If $\hat{\mathbb{E}}$ is countably sub-additive, then for $X\in\mathcal{H}$,

$$\hat{\mathbb{E}}(|X|)\le C_{\mathbb{V}}(|X|).\tag{2.9}$$

Theorem 3.1. Assume that $\{X,X_n,n\ge1\}$ is a sequence of negatively dependent and identically distributed random variables under sub-linear expectations. Suppose that $\{a_{nk},1\le k\le n,n\ge1\}$ is an array of positive real numbers and that $\hat{\mathbb{E}}$ is countably sub-additive. Set $b_n=n^{1/\alpha}\ln^{1/\gamma}n$, where $0<\alpha\le2$ and $0<\gamma<2$. If

$$C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty,\tag{3.1}$$
$$\sum_{k=1}^{n}a_{nk}^{\alpha}=O(n),\tag{3.2}$$
$$\hat{\mathbb{E}}X_k=\hat{\varepsilon}X_k=0,\tag{3.3}$$

then for any $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)<\infty.\tag{3.4}$$

Theorem 3.2. Assume that the conditions of Theorem 3.1 are satisfied. Then for $0<\theta<2$ and any $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{-1}C_{\mathbb{V}}\Big\{b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon\Big\}_{+}^{\theta}<\infty,\tag{3.5}$$

where $\{x\}_{+}$ denotes the positive part of $x$.

Remark 3.1. Theorem 3.2 not only carries the result of Wu and Wang [18] over from the probability space to the sub-linear expectation space, but also relaxes $1<\alpha\le2$ to $0<\alpha\le2$, $0<\gamma<\alpha$ to $0<\gamma<2$ and $0<\theta<\alpha$ to $0<\theta<2$, thereby enlarging the original ranges and strengthening the result.
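As an informal sanity check of Theorem 3.1 (not part of the proof), consider the classical special case in which the sub-linear expectation reduces to an ordinary expectation and the summands are i.i.d. With the illustrative choices $a_{nk}\equiv1$, $\alpha=2$, $\gamma=1$ and $\varepsilon=1$, the Monte Carlo sketch below estimates $\mathbb{V}\big(\max_{1\le j\le n}|\sum_{k=1}^{j}a_{nk}X_k|>b_n\varepsilon\big)$ for standard normal summands and shows it decaying rapidly in $n$, in line with the summability asserted in (3.4).

```python
# Informal Monte Carlo illustration of Theorem 3.1 in the classical i.i.d. case
# (not part of the paper's argument; all parameter choices are illustrative).
import numpy as np

rng = np.random.default_rng(2023)
eps = 1.0

def tail_prob(n, reps=1000):
    b_n = n ** 0.5 * np.log(n)                 # b_n = n^{1/alpha} ln^{1/gamma} n with alpha=2, gamma=1
    steps = rng.standard_normal((reps, n))     # i.i.d. N(0,1) summands, a_nk = 1
    max_partial = np.abs(np.cumsum(steps, axis=1)).max(axis=1)
    return (max_partial > eps * b_n).mean()

for n in (10, 100, 1000, 5000):
    print(n, tail_prob(n))
```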

For fixed $n\ge1$ and $1\le k\le n$, denote

$$Y_{nk}:=-b_nI(X_k<-b_n)+X_kI(|X_k|\le b_n)+b_nI(X_k>b_n),\quad Z_{nk}:=X_k-Y_{nk}=(X_k+b_n)I(X_k<-b_n)+(X_k-b_n)I(X_k>b_n).$$
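In the classical setting $Y_{nk}$ is simply $X_k$ clipped to $[-b_n,b_n]$; the following tiny sketch (illustrative only, the sample and the value of $b_n$ are arbitrary) checks the decomposition $X_k=Y_{nk}+Z_{nk}$ and the bound $|Y_{nk}|\le\min(|X_k|,b_n)$ numerically.

```python
# Illustrative check of the truncation used in the proof (not part of the paper):
# Y = clip(X, -b, b) and Z = X - Y, so X = Y + Z and |Y| <= min(|X|, b).
import numpy as np

rng = np.random.default_rng(3)
b_n = 2.0                                  # stands in for b_n = n^{1/alpha} ln^{1/gamma} n
X = rng.standard_cauchy(1000)              # heavy-tailed sample so that truncation is visible

Y = np.clip(X, -b_n, b_n)                  # Y_nk
Z = X - Y                                  # Z_nk, non-zero only where |X| > b_n
assert np.allclose(X, Y + Z)
assert np.all(np.abs(Y) <= np.minimum(np.abs(X), b_n) + 1e-12)
assert np.all((Z != 0) == (np.abs(X) > b_n))
print("truncation identities hold on this sample")
```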

We can easily see that for any $\varepsilon>0$,

$$\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big\}\subset\big\{\exists\,1\le k\le n:|X_k|>b_n\big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon,\ \forall\,1\le k\le n:|X_k|\le b_n\Big\}\subset\big\{\exists\,1\le k\le n:|X_k|>b_n\big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>b_n\varepsilon-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\Big\}.$$

Then, we have

$$\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\mathbb{V}(|X_k|>b_n)+\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>b_n\varepsilon-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\Big):=I_1+I_2.$$

    In order to prove (3.4), we just need to prove

$$I_1<\infty,\tag{4.1}$$
$$I_2<\infty.\tag{4.2}$$

First of all, we prove (4.1). In the probability space, $EI(|X|\le a)=P(|X|\le a)$ holds; under sub-linear expectations, however, the indicator $I(|x|\le a)$ is not necessarily continuous, so $\hat{\mathbb{E}}I(|X|\le a)$ does not necessarily exist. Hence we need to replace the indicator function by functions in $C_{l,Lip}(\mathbb{R})$. We define the function $g(x)\in C_{l,Lip}(\mathbb{R})$ as follows.

For $2^{-1/\alpha}<\mu<1$, suppose that the even function $g(x)\in C_{l,Lip}(\mathbb{R})$ is decreasing in $x\ge0$ and satisfies $0\le g(x)\le1$ for all $x$, $g(x)=1$ if $|x|\le\mu$ and $g(x)=0$ if $|x|>1$. Then

$$I(|x|\le\mu)\le g(|x|)\le I(|x|\le1),\quad I(|x|>1)\le1-g(|x|)\le I(|x|>\mu).\tag{4.3}$$
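The paper only requires the listed properties of $g$; one concrete choice satisfying them is the piecewise-linear even function sketched below (the specific shape and the value $\mu=0.8$ are our illustrative assumptions; note that $2^{-1/\alpha}\le2^{-1/2}<0.8$ for $0<\alpha\le2$). The script also checks the sandwich (4.3) numerically.

```python
# A concrete g in C_{l,Lip}(R) with the properties required above (our choice,
# not prescribed by the paper): even, decreasing in |x|, equal to 1 on |x| <= mu
# and to 0 on |x| > 1, satisfying the sandwich (4.3).
import numpy as np

mu = 0.8                                        # any 2^{-1/alpha} < mu < 1 works for 0 < alpha <= 2

def g(x):
    ax = np.abs(x)
    return np.clip((1.0 - ax) / (1.0 - mu), 0.0, 1.0)   # piecewise linear, hence Lipschitz

xs = np.linspace(-3, 3, 10001)
lower = (np.abs(xs) <= mu).astype(float)        # I(|x| <= mu)
upper = (np.abs(xs) <= 1.0).astype(float)       # I(|x| <= 1)
assert np.all(lower <= g(xs) + 1e-12) and np.all(g(xs) <= upper + 1e-12)
assert np.all((np.abs(xs) > 1.0).astype(float) <= 1.0 - g(xs) + 1e-12)
assert np.all(1.0 - g(xs) <= (np.abs(xs) > mu).astype(float) + 1e-12)
print("sandwich (4.3) holds for this choice of g")
```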

By (2.4), (3.1) and (4.3), we have

$$I_1=\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\mathbb{V}(|X_k|>b_n)\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big)=\sum_{n=1}^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\le\sum_{n=1}^{\infty}\mathbb{V}(|X|>\mu b_n)<\infty;$$

    hence, the proof of (4.1) is established.

Next, we prove (4.2). First of all, we verify that

$$b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\to0,\quad\text{as}\ n\to\infty.$$

Noting that

$$|Z_{nk}|=|X_k+b_n|I(X_k<-b_n)+|X_k-b_n|I(X_k>b_n)\le|X_k|\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big).\tag{4.4}$$

According to (3.3) and the non-negativity of $a_{nk}$, we can get

$$\hat{\mathbb{E}}(a_{nk}X_k)=a_{nk}\hat{\mathbb{E}}X_k=0.\tag{4.5}$$

Combining with (4.3), we have

$$|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\le|X|I(|X|>\mu b_n)\le\frac{|X|^2\ln^{1-2/\gamma}|X|}{\mu b_n\ln^{1-2/\gamma}(\mu b_n)}.\tag{4.6}$$

When $1\le\alpha\le2$, combining (2.2), (2.9), (3.1), (3.2), (4.4)–(4.6) and $\ln b_n\le c\ln n$, it is easy to obtain that

$$\begin{aligned}b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|&\le b_n^{-1}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}Y_{nk})|=b_n^{-1}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}X_k)-\hat{\mathbb{E}}(a_{nk}Y_{nk})|\le b_n^{-1}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}X_k-a_{nk}Y_{nk}|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Z_{nk}|\\&\le b_n^{-1}\Big(\sum_{k=1}^{n}a_{nk}^{\alpha}\Big)^{1/\alpha}\Big(\sum_{k=1}^{n}1\Big)^{1-1/\alpha}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\le cnb_n^{-1}\big(\mu b_n\ln^{1-2/\gamma}(\mu b_n)\big)^{-1}\hat{\mathbb{E}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)\\&\le cnb_n^{-2}\ln^{2/\gamma-1}b_n\le c\frac{1}{n^{2/\alpha-1}\ln n}\to0,\quad\text{as}\ n\to\infty.\end{aligned}\tag{4.7}$$

When $0<\alpha<1$, according to (2.9) and (3.1), we can get $\hat{\mathbb{E}}(|X|)\le C_{\mathbb{V}}(|X|)\le c+C_{\mathbb{V}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)<\infty$. Noting $|Y_{nk}|\le|X_k|$ and $\gamma>0$, we have

$$b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le b_n^{-1}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}Y_{nk})|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Y_{nk}|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le cn^{1/\alpha}b_n^{-1}\hat{\mathbb{E}}|X|\le c\frac{1}{\ln^{1/\gamma}n}\to0,\quad\text{as}\ n\to\infty.\tag{4.8}$$

Thus, for any $\varepsilon>0$ and all $n$ large enough, we have

$$\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le\frac{b_n\varepsilon}{2}.$$

In order to prove (4.2), it suffices to show that

$$I_3=\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>\frac{b_n\varepsilon}{2}\Big)<\infty.$$

Noting that, for $p\ge1$,

$$\hat{\mathbb{E}}\big|Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big|^{p}\le c_p\hat{\mathbb{E}}\big(|Y_{nk}|^{p}+|\hat{\mathbb{E}}Y_{nk}|^{p}\big)\le2c_p\hat{\mathbb{E}}|Y_{nk}|^{p},\tag{4.9}$$

where $c_p$ is a positive constant depending only on $p$.

We know that $a_{nk}$ is non-negative. By Definition 2.5, for fixed $n\ge1$, $\{a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk}),1\le k\le n\}$ is still a negatively dependent sequence of random variables. By (4.9), the Markov inequality and (2.3) with $q=2$, we can get

$$\begin{aligned}I_3&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\hat{\mathbb{E}}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|^2\Big)\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}\big|a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big|^2+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\Big[\big(\hat{\mathbb{E}}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{+}+\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big]\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^2+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\Big[\big(\hat{\mathbb{E}}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{+}+\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big]\Big)^{2}:=I_4+I_5.\end{aligned}$$

By (4.3) and the $C_r$ inequality, for any $\lambda>0$, we can obtain

$$|Y_{nk}|^{\lambda}\le|X_k|^{\lambda}I(|X_k|\le b_n)+b_n^{\lambda}I(|X_k|>b_n)\le|X_k|^{\lambda}g\Big(\frac{\mu|X_k|}{b_n}\Big)+b_n^{\lambda}\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big).\tag{4.10}$$

For $I_4$, combining (2.4), (3.1), (3.2) and (4.10), we have

$$\begin{aligned}I_4&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^2=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}a_{nk}^{2}\hat{\mathbb{E}}|Y_{nk}|^2\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\Big(\hat{\mathbb{E}}|X|^2g\Big(\frac{\mu|X|}{b_n}\Big)+b_n^{2}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{b_n}\Big)\Big)+c\sum_{n=1}^{\infty}n^{2/\alpha-1}\mathbb{V}(|X|>\mu b_n)\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{b_n}\Big)\Big)+c.\end{aligned}$$

Let $g_j(x)\in C_{l,Lip}(\mathbb{R})$, $j\ge1$, be such that $0\le g_j(x)\le1$ for all $x$; $g_j(x/b_{2^{j}})=1$ if $b_{2^{j-1}}<|x|\le b_{2^{j}}$, and $g_j(x/b_{2^{j}})=0$ if $|x|\le\mu b_{2^{j-1}}$ or $|x|>(1+\mu)b_{2^{j}}$. Then for any $r>0$, we can obtain

$$I\big(b_{2^{j-1}}<|X|\le b_{2^{j}}\big)\le g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\le I\big(\mu b_{2^{j-1}}<|X|\le(1+\mu)b_{2^{j}}\big),\tag{4.11}$$
$$|X|^{r}g\Big(\frac{|X|}{b_{2^{k}}}\Big)\le1+\sum_{j=1}^{k}|X|^{r}g_j\Big(\frac{|X|}{b_{2^{j}}}\Big).\tag{4.12}$$

Note that, according to (2.5), (3.1), (4.11), (4.12), $0<\gamma<2$ and the fact that $g(x)$ is decreasing in $x\ge0$, it is easy to prove that

$$\begin{aligned}I_4&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{b_n}\Big)\Big)+c\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{k(2/\alpha-1)}b_{2^{k-1}}^{-2}\hat{\mathbb{E}}\Big(|X|^2g\Big(\frac{\mu|X|}{b_{2^{k}}}\Big)\Big)+c\\&\le c\sum_{k=1}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}\sum_{j=1}^{k}\hat{\mathbb{E}}\Big(|X|^2g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)+c=c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^2g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=j}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}+c\\&\le c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^2g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=j}^{\infty}k^{-2/\gamma}+c\le c\sum_{j=1}^{\infty}j^{1-2/\gamma}\hat{\mathbb{E}}\Big(|X|^2g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)+c\\&\le c\sum_{j=1}^{\infty}j^{1-2/\gamma}b_{2^{j}}^{2}\mathbb{V}\big(|X|>b_{2^{j-1}}\big)+c=c\sum_{j=1}^{\infty}2^{2j/\alpha}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)+c<\infty.\end{aligned}\tag{4.13}$$

Next, we show $I_5<\infty$. According to (2.1), we have $\hat{\mathbb{E}}\big(a_{nk}Y_{nk}-\hat{\mathbb{E}}(a_{nk}Y_{nk})\big)=0$, so the terms $\big(\hat{\mathbb{E}}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{+}$ vanish. By the definition of the conjugate expectation, $\hat{\mathbb{E}}(-X)=-\hat{\varepsilon}(X)$. Combining (2.2) and the $C_r$ inequality, we can obtain

$$\begin{aligned}I_5&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big)^{2}=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big(\hat{\mathbb{E}}\big[-a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{+}\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big|\hat{\mathbb{E}}\big[-a_{nk}Y_{nk}+\hat{\mathbb{E}}a_{nk}Y_{nk}\big]\big|\Big)^{2}=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big|\hat{\mathbb{E}}[-a_{nk}Y_{nk}]+\hat{\mathbb{E}}[a_{nk}Y_{nk}]\big|\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big(|\hat{\mathbb{E}}[-a_{nk}Y_{nk}]|+|\hat{\mathbb{E}}[a_{nk}Y_{nk}]|\big)\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}|\hat{\mathbb{E}}[-Y_{nk}]|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}|\hat{\mathbb{E}}[Y_{nk}]|\Big)^{2}\\&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\big|\hat{\mathbb{E}}[-X_k]-\hat{\mathbb{E}}[-Y_{nk}]\big|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\big|\hat{\mathbb{E}}[X_k]-\hat{\mathbb{E}}[Y_{nk}]\big|\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}.\end{aligned}$$

Noting $2^{-1/\alpha}<\mu<1$, we can get $\mu>b_{2^{k-1}}/b_{2^{k}}$. By (4.3), we have

$$1-g\Big(\frac{|X|}{b_{2^{k}}}\Big)\le I\Big(\frac{|X|}{b_{2^{k}}}>\mu\Big)\le I\big(|X|>b_{2^{k-1}}\big)=\sum_{j=k}^{\infty}I\big(b_{2^{j-1}}<|X|\le b_{2^{j}}\big)\le\sum_{j=k}^{\infty}g_j\Big(\frac{|X|}{b_{2^{j}}}\Big).\tag{4.14}$$

For $I_5$, we combine (2.5), (3.1), (3.2), (4.11), (4.14) and the countable sub-additivity of $\hat{\mathbb{E}}$. We can get

$$\begin{aligned}I_5&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}n^{\max(2,2/\alpha)}\hat{\mathbb{E}}^{2}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{-k}b_{2^{k-1}}^{-2}2^{\max(2,2/\alpha)k}\hat{\mathbb{E}}^{2}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big)\le c\sum_{k=1}^{\infty}b_{2^{k-1}}^{-2}2^{\max(2,2/\alpha)k}\Big(\sum_{j=k}^{\infty}\hat{\mathbb{E}}|X|g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)^{2}\\&\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\Big(b_{2^{k}}k^{1-2/\gamma}\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\Big)\Big(\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\Big)\\&\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\Big(\sum_{j=k}^{\infty}b_{2^{j}}^{2}j^{1-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\Big)\Big(\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\Big)\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\\&=c\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\sum_{k=1}^{j}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}=c\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\sum_{k=1}^{j}2^{\max(2-3/\alpha,-1/\alpha)k}k^{-1-1/\gamma}\\&\le c\begin{cases}\displaystyle\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big),&0<\alpha\le3/2;\\ \displaystyle\sum_{j=1}^{\infty}2^{j(2-3/\alpha)}j^{-1-1/\gamma}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big),&3/2<\alpha\le2;\end{cases}\\&\le c\begin{cases}\displaystyle\sum_{j=1}^{\infty}2^{j/\alpha}j^{1/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty,&0<\alpha\le3/2;\\ \displaystyle\sum_{j=1}^{\infty}2^{2j(1-1/\alpha)}j^{-1}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty,&3/2<\alpha\le2,\end{cases}\end{aligned}$$

where we have used that $\sum_{j}b_{2^{j}}^{2}j^{1-2/\gamma}\mathbb{V}(|X|>cb_{2^{j}})=\sum_{j}2^{2j/\alpha}j\,\mathbb{V}(|X|>cb_{2^{j}})<\infty$ by (2.5), and the last two series are likewise finite by (2.5). Hence $I_5<\infty$; together with $I_4<\infty$ this yields $I_3<\infty$, so (4.2) holds and the proof of Theorem 3.1 is completed.

For any $\varepsilon>0$, we have by Theorem 3.1 that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}C_{\mathbb{V}}\Big\{b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon\Big\}_{+}^{\theta}&=\sum_{n=1}^{\infty}n^{-1}\int_0^{\infty}\mathbb{V}\Big(b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon>t^{1/\theta}\Big)dt\\&\le\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)+\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_nt^{1/\theta}\Big)dt\\&\le c+\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_nt^{1/\theta}\Big)dt.\end{aligned}$$

In order to prove (3.5), for $t\ge1$ and $1\le k\le n$, denote

$$Y_{nk}:=-b_nt^{1/\theta}I\big(X_k<-b_nt^{1/\theta}\big)+X_kI\big(|X_k|\le b_nt^{1/\theta}\big)+b_nt^{1/\theta}I\big(X_k>b_nt^{1/\theta}\big),\quad Z_{nk}:=X_k-Y_{nk}=\big(X_k+b_nt^{1/\theta}\big)I\big(X_k<-b_nt^{1/\theta}\big)+\big(X_k-b_nt^{1/\theta}\big)I\big(X_k>b_nt^{1/\theta}\big).$$

We can easily see that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_nt^{1/\theta}\Big)dt&\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\int_1^{\infty}\mathbb{V}\big(|X_k|>b_nt^{1/\theta}\big)dt\\&\quad+\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>b_nt^{1/\theta}-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\Big)dt:=H_1+H_2.\end{aligned}$$

    In order to prove (3.5), we just need to prove

$$H_1<\infty,\tag{4.15}$$
$$H_2<\infty.\tag{4.16}$$

First of all, we prove (4.15). For $t^{1/\theta}\ge1$, since $g(x)$ is decreasing in $x\ge0$, we have $1-g\big(\frac{|X|}{b_nt^{1/\theta}}\big)\le1-g\big(\frac{|X|}{b_n}\big)$. According to (2.5), (3.1), (4.11), (4.14), $0<\theta<2$ and the countable sub-additivity of $\hat{\mathbb{E}}$, we can obtain

$$\begin{aligned}H_1&\le\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\sum_{k=1}^{n}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_k|}{b_nt^{1/\theta}}\Big)\Big)dt\le\sum_{n=1}^{\infty}\int_1^{\infty}\hat{\mathbb{E}}\Big(\frac{|X|^{2}\ln^{1/2-2/\gamma}|X|}{\big(\mu b_nt^{1/\theta}\big)^{2}\ln^{1/2-2/\gamma}\big(\mu b_nt^{1/\theta}\big)}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)dt\\&\le c\sum_{n=1}^{\infty}\int_1^{\infty}\frac{\ln^{2/\gamma-1/2}\big(\mu b_nt^{1/\theta}\big)}{\big(\mu b_nt^{1/\theta}\big)^{2}}\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)dt\\&=c\sum_{n=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\int_{\mu b_n}^{\infty}y^{-2}\ln^{2/\gamma-1/2}y\cdot\frac{\theta y^{\theta-1}}{b_n^{\theta}\mu^{\theta}}dy\qquad\big(\text{let}\ y=\mu b_nt^{1/\theta}\big)\\&\le c\sum_{n=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)b_n^{-\theta}b_n^{\theta-2}\ln^{2/\gamma-1/2}(\mu b_n)\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}b_{2^{k-1}}^{-2}\ln^{2/\gamma-1/2}\big(\mu b_{2^{k-1}}\big)\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big)\\&\le c\sum_{k=1}^{\infty}2^{k(1-2/\alpha)}k^{-1/2}\sum_{j=k}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)=c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=1}^{j}2^{k(1-2/\alpha)}k^{-1/2}\\&\le c\begin{cases}\displaystyle\sum_{j=1}^{\infty}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big),&0<\alpha<2;\\ \displaystyle\sum_{j=1}^{\infty}j^{1/2}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big),&\alpha=2;\end{cases}\\&\le c\begin{cases}\displaystyle\sum_{j=1}^{\infty}2^{2j/\alpha}j^{1/2}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty,&0<\alpha<2;\\ \displaystyle\sum_{j=1}^{\infty}2^{j}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty,&\alpha=2;\end{cases}\end{aligned}$$

    hence, the proof of (4.15) is established.

Next, we prove (4.16). We first verify that

$$\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\to0,\quad\text{as}\ n\to\infty.$$

When $1\le\alpha\le2$, by considerations similar to (4.7), we have

$$\begin{aligned}\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|&\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}Y_{nk})|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}X_k)-\hat{\mathbb{E}}(a_{nk}Y_{nk})|\\&\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Z_{nk}|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)\\&\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\,cn\,\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\le cnb_n^{-1}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le cnb_n^{-1}\big(\mu b_n\ln^{1-2/\gamma}(\mu b_n)\big)^{-1}\hat{\mathbb{E}}\big(|X|^2\ln^{1-2/\gamma}|X|\big)\le c\frac{1}{n^{2/\alpha-1}\ln n}\to0,\quad\text{as}\ n\to\infty.\end{aligned}$$

When $0<\alpha<1$, noting $|Y_{nk}|\le|X_k|$, similarly to (4.8), we can get

$$\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}|\hat{\mathbb{E}}(a_{nk}Y_{nk})|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Y_{nk}|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le c\frac{1}{\ln^{1/\gamma}n}\to0,\quad\text{as}\ n\to\infty.$$

Hence, for $t^{1/\theta}\ge1$ and all $n$ large enough, we can get

$$\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le\frac{b_nt^{1/\theta}}{2}.$$

In order to prove (4.16), it suffices to show

$$H_3=\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>\frac{b_nt^{1/\theta}}{2}\Big)dt<\infty.$$

We know that $a_{nk}$ is non-negative. By Definition 2.5, for fixed $n\ge1$, $\{a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk}),1\le k\le n\}$ is still a negatively dependent sequence of random variables. By (4.9), the Markov inequality and (2.3) with $q=2$, we can get

$$\begin{aligned}H_3&=\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>\frac{b_nt^{1/\theta}}{2}\Big)dt\le c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\hat{\mathbb{E}}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|^{2}\Big)dt\\&\le c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}\big|a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big|^{2}dt+c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\Big(\sum_{k=1}^{n}\Big[\big(\hat{\mathbb{E}}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{+}+\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big]\Big)^{2}dt\\&\le c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^{2}dt+c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\Big(\sum_{k=1}^{n}\Big[\big(\hat{\mathbb{E}}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{+}+\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big]\Big)^{2}dt\\&:=H_4+H_5.\end{aligned}$$

For $H_4$, by considerations similar to (4.10), we can get, for any $\lambda>0$,

$$|Y_{nk}|^{\lambda}\le|X_k|^{\lambda}g\Big(\frac{\mu|X_k|}{b_nt^{1/\theta}}\Big)+\big(b_nt^{1/\theta}\big)^{\lambda}\Big(1-g\Big(\frac{|X_k|}{b_nt^{1/\theta}}\Big)\Big).\tag{4.17}$$

Combining (3.2) and (4.17), we have

$$\begin{aligned}H_4&=\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^{2}dt=\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\sum_{k=1}^{n}a_{nk}^{2}\hat{\mathbb{E}}|Y_{nk}|^{2}dt\\&\le\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\sum_{k=1}^{n}a_{nk}^{2}\Big(\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)+\big(b_nt^{1/\theta}\big)^{2}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)dt\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_1^{\infty}t^{-2/\theta}\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)dt+c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_1^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)dt:=H_{41}+H_{42}.\end{aligned}$$

For $H_{42}$, we combine (2.5), (3.1), (4.11), (4.14), $0<\gamma<2$, $0<\theta<2$, the countable sub-additivity of $\hat{\mathbb{E}}$ and the fact that $g(x)$ is decreasing in $x\ge0$. We have

$$\begin{aligned}H_{42}&=c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_1^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)dt\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_1^{\infty}\hat{\mathbb{E}}\Big(\frac{|X|^{2}\ln^{1/2-2/\gamma}|X|}{\big(\mu b_nt^{1/\theta}\big)^{2}\ln^{1/2-2/\gamma}\big(\mu b_nt^{1/\theta}\big)}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)dt\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_1^{\infty}\frac{\ln^{2/\gamma-1/2}\big(\mu b_nt^{1/\theta}\big)}{\big(\mu b_nt^{1/\theta}\big)^{2}}\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)dt\\&=c\sum_{n=1}^{\infty}n^{2/\alpha-1}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\int_{\mu b_n}^{\infty}y^{-2}\ln^{2/\gamma-1/2}y\cdot\frac{\theta y^{\theta-1}}{b_n^{\theta}\mu^{\theta}}dy\qquad\big(\text{let}\ y=\mu b_nt^{1/\theta}\big)\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)b_n^{-\theta}b_n^{\theta-2}\ln^{2/\gamma-1/2}(\mu b_n)\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{k(2/\alpha-1)}b_{2^{k-1}}^{-2}\ln^{2/\gamma-1/2}\big(\mu b_{2^{k-1}}\big)\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big)\\&\le c\sum_{k=1}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}k^{2/\gamma-1/2}\sum_{j=k}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)=c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=1}^{j}k^{-1/2}\\&\le c\sum_{j=1}^{\infty}j^{1/2}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)=c\sum_{j=1}^{\infty}2^{2j/\alpha}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty.\end{aligned}\tag{4.18}$$

Next, we prove $H_{41}<\infty$. Similarly to the proofs of (4.13) and (4.18), we can get

$$\begin{aligned}H_{41}&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_1^{\infty}t^{-2/\theta}\hat{\mathbb{E}}\Big(|X|^{2}\Big(g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)-g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\Big)dt+c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_1^{\infty}t^{-2/\theta}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)\Big)dt\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_1^{\infty}\frac{\ln^{2/\gamma-1/2}b_n}{b_n^{2}t^{2/\theta}}\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\Big)dt+c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)\Big)<\infty.\end{aligned}$$

Next, we show $H_5<\infty$. Similarly to the estimate of $I_5$, we have

$$\begin{aligned}H_5&=c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\Big(\sum_{k=1}^{n}\big(\hat{\varepsilon}\big[a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\big]\big)^{-}\Big)^{2}dt\le c\sum_{n=1}^{\infty}n^{-1}\int_1^{\infty}\big(b_nt^{1/\theta}\big)^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}dt\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\int_1^{\infty}t^{-2/\theta}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)\Big)^{2}dt\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\Big)^{2}<\infty.\end{aligned}$$

    Hence, the proof of Theorem 3.2 is established.

This paper has examined complete convergence and complete integral convergence in the sub-linear expectation space. The proof methodology differs from that used in the probability space, as $\mathbb{V}$ and $\hat{\mathbb{E}}$ are not countably sub-additive in general in the sub-linear expectation space. Additionally, the definition of identical distribution under sub-linear expectations is based on $\hat{\mathbb{E}}$ rather than on $\mathbb{V}$.

Therefore, the use of suitable auxiliary tools is crucial for conducting a thorough investigation in the sub-linear expectation space. This study primarily relies on Zhang's [5] Rosenthal-type inequality under sub-linear expectations (Lemma 2.1), which serves as a useful tool in our proofs. Our findings indicate that the complete integral convergence of maxima obtained here is more comprehensive than previous results. In upcoming research, we aim to explore further results in this direction.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This paper was supported by the National Natural Science Foundation of China (12061028) and Guangxi Colleges and Universities Key Laboratory of Applied Statistics.

    In this article, all authors disclaim any conflict of interest.



    [1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Ito type, In: Stochastic analysis and applications, Berlin, Heidelberg: Springer, 2006,541–567. http://doi.org/10.1007/978-3-540-70847-6_25
[2] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223–2253. https://doi.org/10.1016/j.spa.2007.10.015 doi: 10.1016/j.spa.2007.10.015
    [3] S. G. Peng, Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Sci. China Ser. A-Math., 52 (2009), 1391–1411. https://doi.org/10.1007/s11425-009-0121-8 doi: 10.1007/s11425-009-0121-8
[4] L. X. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z doi: 10.1007/s10473-022-0203-z
    [5] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 doi: 10.1007/s11425-015-5105-2
    [6] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [7] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25 doi: 10.1073/pnas.33.2.25
    [8] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201.
    [9] D. Qiu, P. Chen, Complete moment convergence for i.i.d.random variables, Stat. Probabil. Lett., 91 (2014), 76–82. https://doi.org/10.1016/j.spl.2014.04.001 doi: 10.1016/j.spl.2014.04.001
    [10] W. Yang, S. Hu, Complete moment convergence of pairwise NQD random variables, Stochastics, 87 (2015), 199–208. http://doi.org/10.1080/17442508.2014.939975 doi: 10.1080/17442508.2014.939975
    [11] M. Song, Q. Zhu, Complete moment convergence of extended negatively dependent random variables, J. Inequal. Appl., 2020 (2020), 150. https://doi.org/10.1186/s13660-020-02416-7 doi: 10.1186/s13660-020-02416-7
    [12] S. Li, Q. Wu, Complete integration convergence for arrays of rowwise extended negatively dependent random variables under the sub-linear expectations, AIMS Mathematics, 6 (2021), 12166–12181. https://doi.org/10.3934/math.2021706 doi: 10.3934/math.2021706
    [13] D. Lu, Y. Meng, Complete and complete integral convergence for arrays of row wise widely negative dependent random variables under the sub-linear expectations, Commun. Stat.-Theory. M., 51 (2022), 2994–3007. https://doi.org/10.1080/03610926.2020.1786585 doi: 10.1080/03610926.2020.1786585
    [14] X. Chen, Q. Wu. Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 doi: 10.3934/math.2022540
    [15] F. X. Feng, X. Zeng, A complete convergence theorem of the maximum of partial sums under the sub-linear expectations, Filomat, 36 (2022), 5725–5735. https://doi.org/10.2298/FIL2217725F doi: 10.2298/FIL2217725F
[16] M. Xu, K. Cheng, W. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 7 (2022), 19998–20019. https://doi.org/10.3934/math.20221094 doi: 10.3934/math.20221094
[17] M. Xu, X. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 8504–8521. https://doi.org/10.3934/math.2023428 doi: 10.3934/math.2023428
    [18] Y. Wu, Y. Wang, On the complete moment convergence for weighted sums of weakly dependent random variables, J. Math. Inequal., 15 (2021), 277–291. https://doi.org/10.7153/jmi-2021-15-21 doi: 10.7153/jmi-2021-15-21
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)