In this paper, we study complete convergence and complete integration convergence for weighted sums of $m$-extended negatively dependent ($m$-END) random variables in sub-linear expectation spaces, under the moment condition $\hat{\mathbb{E}}|X|^p \leqslant C_{\mathbb{V}}(|X|^p) < \infty$ with $p > 1/\alpha$ and $\alpha > 3/2$. The results obtained can be regarded as extensions of complete convergence and complete moment convergence in classical probability spaces. In addition, a Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of $m$-END random variables under sub-linear expectations is proved.
Citation: He Dong, Xili Tan, Yong Zhang. Complete convergence and complete integration convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space[J]. AIMS Mathematics, 2023, 8(3): 6705-6724. doi: 10.3934/math.2023340
In the era of information modernization, limit theorems are widely used in economics, information science, and risk measurement. The limit theory of classical probability rests on additive probabilities and additive expectations, which is suitable when the model is certain. However, financial and economic problems involve various degrees of uncertainty. In order to analyze and compute under uncertainty, Peng [1,2] proposed the notion of sub-linear expectation and constructed the basic framework of sub-linear expectation theory. Sub-linear expectation relaxes the additivity of probability and expectation in classical probability, which makes the theory more complex and challenging. Under sub-linear expectations, Peng [3] established the central limit theorem. Motivated by Peng's seminal work, many researchers have explored results under sub-linear expectations. Chen and Gan [4] obtained the limiting behavior of weighted sums of independent and identically distributed sequences. Hu and Zhou [5] demonstrated multi-dimensional central limit theorems and laws of large numbers. Zhang [6,7,8] obtained a series of important inequalities under sub-linear expectations. Zhang and Lin [9] studied Kolmogorov's strong law of large numbers. Lan and Zhang [10] proved several moment inequalities, including Bernstein's, Kolmogorov's and Rademacher's inequalities. Guo and Zhang [11] obtained a moderate deviation principle for $m$-dependent random variables under sub-linear expectation.
In 1947, the notion of complete convergence was introduced by Hsu and Robbins [12] as follows. Let $\{X_n, n \geqslant 1\}$ be a sequence of independent and identically distributed random variables in a probability space $(\Omega, \mathcal{F}, P)$ with $EX_1 = 0$ and $EX_1^2 < \infty$, and set $S_n = \sum_{k=1}^{n} X_k$; then
$$\sum_{n=1}^{\infty} P(|S_n| > n\varepsilon) < \infty, \quad \text{for all } \varepsilon > 0.$$
In 1988, Chow [13] established complete moment convergence, which is stronger than complete convergence. In the classical probability setting, complete convergence and complete moment convergence for various dependent sequences are by now relatively mature. For example, Yu et al. [14] proved complete convergence for weighted sums of arrays of rowwise $m$-END random variables. Wu et al. [16,17] and Wang et al. [18] carried out a series of studies on extended negatively dependent (END) random variables. Meng et al. [15] and Ding et al. [19] demonstrated complete convergence and complete moment convergence for END random variables and widely orthant dependent (WOD) random variables, respectively. Within the basic framework of sub-linear expectations, researchers have extended the theories and properties of classical probability spaces. For instance, Feng et al. [20] studied complete convergence and complete moment convergence for weighted sums of arrays of rowwise negatively dependent (ND) random variables. Zhong and Wu [21], Jia and Wu [22], and Lu and Meng [23] obtained further recent results on complete convergence and complete integral convergence.
This paper aims to prove complete convergence and complete integral convergence for weighted sums of arrays of rowwise $m$-END random variables under sub-linear expectations. The rest of the paper is organized as follows. In Section 2, we recall basic notations, definitions and related properties under sub-linear expectations, together with preliminary lemmas used in proving the main theorems. In Section 3, the complete convergence, complete integral convergence and Marcinkiewicz-Zygmund type strong law of large numbers under sub-linear expectations are stated. The proofs of these theorems are given in the last section.
We use the framework and notions of Peng [1,2]. Let $(\Omega, \mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega, \mathcal{F})$ such that if $X_1, X_2, \ldots, X_n \in \mathcal{H}$ then $\varphi(X_1, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying
$$|\varphi(x) - \varphi(y)| \leqslant c(1 + |x|^m + |y|^m)|x - y|, \quad \forall x, y \in \mathbb{R}^n,$$
for some $c > 0$ and $m \in \mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables; in this case we write $X \in \mathcal{H}$.
Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}: \mathcal{H} \to \bar{\mathbb{R}}$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have

(a) Monotonicity: if $X \geqslant Y$ then $\hat{\mathbb{E}}[X] \geqslant \hat{\mathbb{E}}[Y]$;

(b) Constant preserving: $\hat{\mathbb{E}}[c] = c$;

(c) Sub-additivity: $\hat{\mathbb{E}}[X + Y] \leqslant \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$;

(d) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$, $\lambda \geqslant 0$.
Here $\bar{\mathbb{R}} = [-\infty, \infty]$. The triple $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a sub-linear expectation space. Given a sub-linear expectation $\hat{\mathbb{E}}$, the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ is defined by
$$\hat{\varepsilon}[X] = -\hat{\mathbb{E}}[-X], \quad \forall X \in \mathcal{H}.$$
From the definition, it is easily shown that for all $X, Y \in \mathcal{H}$,
$$\hat{\varepsilon}[X] \leqslant \hat{\mathbb{E}}[X], \quad \hat{\mathbb{E}}[X + c] = \hat{\mathbb{E}}[X] + c, \quad |\hat{\mathbb{E}}[X - Y]| \leqslant \hat{\mathbb{E}}|X - Y|, \quad \hat{\mathbb{E}}[X - Y] \geqslant \hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[Y].$$
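Properties (a)-(d) admit a concrete finite-dimensional model: an upper expectation over a finite family of probability weights is sub-linear. The following numerical sketch is only illustrative (the sample space and the family of measures are our own choices, not from the paper):

```python
import numpy as np

# Illustrative model: on a finite sample space, define
# E_hat[X] = max over a family of probability weight vectors P of E_P[X].
# Such an upper expectation satisfies (a)-(d) of Definition 2.1.
rng = np.random.default_rng(0)
n_outcomes = 5
family = [rng.dirichlet(np.ones(n_outcomes)) for _ in range(3)]

def E_hat(X):
    """Upper expectation: sup_P E_P[X] over the finite family."""
    return max(float(p @ X) for p in family)

X = rng.normal(size=n_outcomes)
Y = rng.normal(size=n_outcomes)

assert E_hat(X) >= E_hat(X - np.abs(Y))                    # (a) monotonicity
assert abs(E_hat(np.full(n_outcomes, 3.0)) - 3.0) < 1e-12  # (b) constant preserving
assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12         # (c) sub-additivity
assert abs(E_hat(2.5 * X) - 2.5 * E_hat(X)) < 1e-12        # (d) positive homogeneity
# the conjugate expectation is dominated: eps_hat[X] = -E_hat[-X] <= E_hat[X]
assert -E_hat(-X) <= E_hat(X) + 1e-12
```

Each assertion holds because the corresponding inequality holds for every linear $E_P$ and survives taking the maximum over the family.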
Definition 2.2. Let $\mathcal{G} \subset \mathcal{F}$. A function $V: \mathcal{G} \to [0, 1]$ is called a capacity if

(1) $V(\emptyset) = 0$, $V(\Omega) = 1$;

(2) $V(A) \leqslant V(B)$, $\forall A \subset B$, $A, B \in \mathcal{G}$.

It is called sub-additive if $V(A \cup B) \leqslant V(A) + V(B)$ for all $A, B \in \mathcal{G}$ with $A \cup B \in \mathcal{G}$.
The upper and lower capacities $(\mathbb{V}, \mathcal{V})$ are defined by
$$\mathbb{V}(A) = \inf\{\hat{\mathbb{E}}[\xi] : I(A) \leqslant \xi, \ \xi \in \mathcal{H}\}, \quad \mathcal{V}(A) = 1 - \mathbb{V}(A^c), \quad \forall A \in \mathcal{F},$$
where $A^c$ is the complement of $A$. It is obvious that $\mathbb{V}$ is sub-additive and
$$\mathcal{V}(A) \leqslant \mathbb{V}(A), \quad \forall A \in \mathcal{F},$$
$$\mathbb{V}(A) := \hat{\mathbb{E}}[I_A], \quad \mathcal{V}(A) := \hat{\varepsilon}[I_A], \quad \text{if } I_A \in \mathcal{H},$$
$$\hat{\mathbb{E}}[f] \leqslant \mathbb{V}(A) \leqslant \hat{\mathbb{E}}[g], \quad \hat{\varepsilon}[f] \leqslant \mathcal{V}(A) \leqslant \hat{\varepsilon}[g], \quad \text{if } f \leqslant I_A \leqslant g, \ f, g \in \mathcal{H}.$$
For all $X \in \mathcal{H}$, $p > 0$ and $x > 0$,
$$I(|X| > x) \leqslant \frac{|X|^p}{x^p} I(|X| > x) \leqslant \frac{|X|^p}{x^p}.$$
Definition 2.3. We define the Choquet integrals $(C_{\mathbb{V}}, C_{\mathcal{V}})$ by
$$C_V[X] = \int_0^{\infty} V(X \geqslant t)\,dt + \int_{-\infty}^0 \left[V(X \geqslant t) - 1\right]dt,$$
with $V$ replaced by $\mathbb{V}$ and $\mathcal{V}$, respectively.
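On a finite sample space the Choquet integral of a non-negative variable can be computed directly from the layer-cake formula above. The sketch below uses an illustrative two-measure upper probability of our own choosing; it also checks that the Choquet integral dominates every linear expectation in the family (since the maximum of integrals is at most the integral of the maximum):

```python
import numpy as np

# Sketch of C_V[X] = int_0^inf V(X >= t) dt for X >= 0 on 3 outcomes,
# where V is the upper probability of a small illustrative family.
weights = [np.array([0.2, 0.3, 0.5]), np.array([0.4, 0.4, 0.2])]

def V(event):                       # event: boolean vector over the outcomes
    return max(float(p[event].sum()) for p in weights)

def choquet(X):                     # layer-cake formula, X >= 0
    xs = np.sort(np.unique(np.concatenate(([0.0], X))))
    # V(X >= t) is constant on each interval (a, b] between consecutive values
    return sum((b - a) * V(X >= b) for a, b in zip(xs[:-1], xs[1:]))

X = np.array([1.0, 2.0, 3.0])
print(choquet(X))                                        # 2.3 for this example
assert all(choquet(X) >= float(p @ X) - 1e-12 for p in weights)
```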
Definition 2.4. [3] (Identical distribution) Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined, respectively, in the sub-linear expectation spaces $(\Omega_1, \mathcal{H}_1, \hat{\mathbb{E}}_1)$ and $(\Omega_2, \mathcal{H}_2, \hat{\mathbb{E}}_2)$. They are called identically distributed, denoted by $X_1 \overset{d}{=} X_2$, if
$$\hat{\mathbb{E}}_1(\varphi(X_1)) = \hat{\mathbb{E}}_2(\varphi(X_2)), \quad \forall \varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n),$$
whenever the sub-linear expectations are finite. A sequence $\{X_n, n \geqslant 1\}$ of random variables is said to be identically distributed if $X_i \overset{d}{=} X_1$ for each $i \geqslant 1$.
Definition 2.5. [7] (END) In a sub-linear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, random variables $\{X_n, n \geqslant 1\}$ are called upper (resp. lower) extended negatively dependent if there is some dominating constant $K \geqslant 1$ such that
$$\hat{\mathbb{E}}\left(\prod_{i=1}^{n} \varphi_i(X_i)\right) \leqslant K \prod_{i=1}^{n} \hat{\mathbb{E}}(\varphi_i(X_i)), \quad n \geqslant 1,$$
whenever the non-negative functions $\varphi_i \in C_{l,\mathrm{Lip}}(\mathbb{R})$, $i = 1, 2, \ldots$, are all non-decreasing (resp. all non-increasing). They are called END if they are both upper extended negatively dependent and lower extended negatively dependent.
Definition 2.6. ($m$-END) Let $m \geqslant 1$ be a fixed positive integer. In a sub-linear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, random variables $\{X_n, n \geqslant 1\}$ are said to be $m$-END if for any $n \geqslant 2$ and any $i_1, i_2, \ldots, i_n$ such that $|i_k - i_j| \geqslant m$ for all $1 \leqslant k \neq j \leqslant n$, the random variables $X_{i_1}, X_{i_2}, \ldots, X_{i_n}$ are END, i.e.,
$$\hat{\mathbb{E}}\left(\prod_{k=1}^{n} \varphi_k(X_{i_k})\right) \leqslant K \prod_{k=1}^{n} \hat{\mathbb{E}}(\varphi_k(X_{i_k})), \quad n \geqslant 1, \ |i_k - i_j| \geqslant m, \ 1 \leqslant k \neq j \leqslant n,$$
where $K \geqslant 1$ is some dominating constant and the non-negative functions $\varphi_i \in C_{l,\mathrm{Lip}}(\mathbb{R})$, $i = 1, 2, \ldots$, are all non-decreasing or all non-increasing. An array of random variables $\{X_{ni}, n \geqslant 1, i \geqslant 1\}$ is called an array of rowwise $m$-END random variables if, for every $n \geqslant 1$, $\{X_{ni}, i \geqslant 1\}$ is a sequence of $m$-END random variables with dominating constant $K_n \geqslant 1$.
It is clear that if $\{X_n, n \geqslant 1\}$ is a sequence of $m$-END random variables and $f_1(x), f_2(x), \ldots \in C_{l,\mathrm{Lip}}(\mathbb{R})$ are all non-decreasing (or all non-increasing), then $\{f_n(X_n), n \geqslant 1\}$ is also a sequence of $m$-END random variables.
In the following, let $\{X_n, n \geqslant 1\}$ be a sequence of random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. The symbol $C$ denotes a generic positive constant which may differ from one place to another, and $I(\cdot)$ denotes an indicator function. The following five lemmas are needed in the proofs of our theorems.
Lemma 2.1. [20] (i) Markov inequality: for all $X \in \mathcal{H}$,
$$\mathbb{V}(|X| \geqslant x) \leqslant \hat{\mathbb{E}}(|X|^p)/x^p, \quad \forall x > 0, \ p > 0.$$
(ii) Hölder inequality: for all $X, Y \in \mathcal{H}$ and $p, q > 1$ satisfying $p^{-1} + q^{-1} = 1$,
$$\hat{\mathbb{E}}(|XY|) \leqslant (\hat{\mathbb{E}}(|X|^p))^{1/p} (\hat{\mathbb{E}}(|Y|^q))^{1/q}.$$
(iii) Jensen inequality: for all $X \in \mathcal{H}$ and $0 < r < s$,
$$(\hat{\mathbb{E}}(|X|^r))^{1/r} \leqslant (\hat{\mathbb{E}}(|X|^s))^{1/s}.$$
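As a sanity check (not a proof), the three inequalities of Lemma 2.1 can be verified numerically in the finite upper-expectation model sketched earlier: each inequality holds for every measure in the family and survives the supremum. The family below is again an illustrative choice of ours:

```python
import numpy as np

# Numerical check of Lemma 2.1 for an upper expectation E over a finite
# family of probability weights and its upper capacity V.
rng = np.random.default_rng(1)
weights = [rng.dirichlet(np.ones(8)) for _ in range(4)]
E = lambda Z: max(float(p @ Z) for p in weights)          # sub-linear expectation
V = lambda ev: max(float(p[ev].sum()) for p in weights)   # upper capacity

X, Y = rng.normal(size=8), rng.normal(size=8)
x, p, q = 1.0, 2.0, 2.0                                   # 1/p + 1/q = 1
r, s = 1.0, 3.0                                           # 0 < r < s

assert V(np.abs(X) >= x) <= E(np.abs(X) ** p) / x ** p + 1e-12                       # Markov
assert E(np.abs(X * Y)) <= E(np.abs(X)**p)**(1/p) * E(np.abs(Y)**q)**(1/q) + 1e-12   # Holder
assert E(np.abs(X)**r)**(1/r) <= E(np.abs(X)**s)**(1/s) + 1e-12                      # Jensen
```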
Lemma 2.2. [21] (i) Suppose $X \in \mathcal{H}$, $\alpha > 0$, $p > 0$. For any $c > 0$,
$$C_{\mathbb{V}}(|X|^p) < \infty \Leftrightarrow \sum_{n=1}^{\infty} n^{\alpha p - 1} \mathbb{V}(|X| > cn^{\alpha}) < \infty. \tag{2.1}$$
(ii) If $C_{\mathbb{V}}(|X|^p) < \infty$, then for any $\theta > 1$ and $c > 0$,
$$\sum_{k=1}^{\infty} \theta^{k\alpha p} \mathbb{V}(|X| > c\theta^{k\alpha}) < \infty. \tag{2.2}$$
Lemma 2.3. [7] (Rosenthal's inequalities) Let $\{X_n, n \geqslant 1\}$ be a sequence of END random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k \leqslant 0$, and set $S_n = \sum_{k=1}^{n} X_k$, $B_n = \sum_{k=1}^{n} \hat{\mathbb{E}}X_k^2$, $M_{n,p} = \sum_{k=1}^{n} \hat{\mathbb{E}}|X_k|^p$. Then for any $p \geqslant 2$ and all $x > 0$,
$$\mathbb{V}(S_n \geqslant x) \leqslant (1 + Ke)\frac{B_n}{x^2}, \tag{2.3}$$
where $K$ is some dominating constant, and there exists a constant $C_p \geqslant 1$ such that for all $x > 0$ and $0 < \delta \leqslant 1$,
$$\mathbb{V}(S_n \geqslant x) \leqslant C_p \delta^{-2p} K \frac{M_{n,p}}{x^p} + K \exp\left\{-\frac{x^2}{2B_n(1+\delta)}\right\}. \tag{2.4}$$
With Lemma 2.3 in hand, we can derive the following Rosenthal-type inequalities for $m$-END random variables.
Lemma 2.4. (Rosenthal's inequalities) Let $\{X_n, n \geqslant 1\}$ be a sequence of $m$-END random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k \leqslant 0$, and set $S_n = \sum_{k=1}^{n} X_k$, $B_n = \sum_{k=1}^{n} \hat{\mathbb{E}}X_k^2$, $M_{n,p} = \sum_{k=1}^{n} \hat{\mathbb{E}}|X_k|^p$. Then for any $p \geqslant 2$ and all $x > 0$,
$$\mathbb{V}(S_n \geqslant x) \leqslant m^2(1 + Ke)\frac{B_n}{x^2}, \tag{2.5}$$
where $K$ is some dominating constant, and there exists a constant $C_p \geqslant 1$ such that for all $x > 0$ and $0 < \delta \leqslant 1$,
$$\mathbb{V}(S_n \geqslant x) \leqslant C_p \delta^{-2p} m^p K \frac{M_{n,p}}{x^p} + mK \exp\left\{-\frac{x^2}{2m^2 B_n(1+\delta)}\right\}. \tag{2.6}$$
Proof. Let $r = \left[\frac{n}{m}\right]$ and define
$$X'_i = \begin{cases} X_i, & 1 \leqslant i \leqslant n; \\ 0, & i > n. \end{cases}$$
Note that $S'_{mr+j} = \sum_{i=0}^{r} X'_{mi+j}$, $j = 1, 2, \ldots, m$; then
$$S_n = \sum_{j=1}^{m} \sum_{i=0}^{r} X'_{mi+j} = \sum_{j=1}^{m} S'_{mr+j},$$
and for all $x > 0$ and $n \geqslant m$,
$$(S_n \geqslant x) \subset \left(S'_{mr+1} \geqslant \frac{x}{m}\right) \cup \cdots \cup \left(S'_{mr+m} \geqslant \frac{x}{m}\right) = \bigcup_{j=1}^{m} \left(S'_{mr+j} \geqslant \frac{x}{m}\right). \tag{2.7}$$
It follows from the definition of $m$-END random variables that $X'_j, X'_{m+j}, \ldots, X'_{mr+j}$ are END random variables for each $j = 1, 2, \ldots, m$. Hence, by (2.3) and (2.7), for all $x > 0$ and $n \geqslant m$ we have
$$\mathbb{V}(S_n \geqslant x) \leqslant \mathbb{V}\left(\bigcup_{j=1}^{m}\left(S'_{mr+j} \geqslant \frac{x}{m}\right)\right) \leqslant \sum_{j=1}^{m} \mathbb{V}\left(S'_{mr+j} \geqslant \frac{x}{m}\right) \leqslant \sum_{j=1}^{m} (1 + Ke)\frac{\sum_{i=0}^{r} \hat{\mathbb{E}}(X'_{mi+j})^2}{(x/m)^2} = m^2(1 + Ke)\frac{B_n}{x^2},$$
which implies (2.5).
By (2.4) and (2.7), for all $x > 0$, $n \geqslant m$ and $p \geqslant 2$, we get
$$\begin{aligned}
\mathbb{V}(S_n \geqslant x) &\leqslant \sum_{j=1}^{m} \mathbb{V}\left(S'_{mr+j} \geqslant \frac{x}{m}\right)\\
&\leqslant \sum_{j=1}^{m}\left(C_p \delta^{-2p} K \frac{\sum_{i=0}^{r} \hat{\mathbb{E}}|X'_{mi+j}|^p}{(x/m)^p} + K \exp\left\{-\frac{x^2}{2m^2 \sum_{i=0}^{r} \hat{\mathbb{E}}(X'_{mi+j})^2 (1+\delta)}\right\}\right)\\
&\leqslant C_p \delta^{-2p} m^p K \frac{M_{n,p}}{x^p} + mK \exp\left\{-\frac{x^2}{2m^2 B_n(1+\delta)}\right\},
\end{aligned}$$
which implies (2.6).
This finishes the proof of Lemma 2.4.
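The index decomposition behind (2.7) — splitting $\{1, \ldots, n\}$ into $m$ residue classes, each with pairwise index gaps of at least $m$ and hence END by Definition 2.6 — can be sanity-checked mechanically (the values of $n$ and $m$ below are arbitrary):

```python
# Check of the decomposition behind (2.7): the classes {j, m+j, 2m+j, ...},
# j = 1, ..., m, partition {1, ..., n}, and within each class the pairwise
# index gaps are >= m, so each class is an END subsequence.
n, m = 17, 3
r = n // m                                   # r = [n/m] as in the proof
classes = [[m * i + j for i in range(r + 1) if m * i + j <= n]
           for j in range(1, m + 1)]
flat = sorted(x for c in classes for x in c)
assert flat == list(range(1, n + 1))         # the classes partition 1..n
assert all(b - a >= m for c in classes for a, b in zip(c, c[1:]))
```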
Lemma 2.5. [7] (Borel-Cantelli lemma) Let $\{A_n, n \geqslant 1\}$ be a sequence of events in $\mathcal{F}$, and suppose that $\mathbb{V}$ is a countably sub-additive capacity. If $\sum_{n=1}^{\infty} \mathbb{V}(A_n) < \infty$, then $\mathbb{V}(A_n, \text{i.o.}) = 0$, where $\{A_n, \text{i.o.}\} = \bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} A_i$.
Theorem 3.1. Let $\{X, X_{ni}, n \geqslant 1, 1 \leqslant i \leqslant n\}$ be an array of rowwise $m$-END and identically distributed random variables under sub-linear expectations with $\hat{\mathbb{E}}(X_{ni}) = \hat{\varepsilon}(X_{ni}) = 0$, and let $\{a_{ni}, n \geqslant 1, 1 \leqslant i \leqslant n\}$ be an array of real numbers. Suppose that $\alpha > 3/2$, $p > 1/\alpha$, $q > \max\{2, p\}$,
$$\sum_{i=1}^{n} |a_{ni}|^q = O(n), \tag{3.1}$$
and
$$\hat{\mathbb{E}}|X|^p \leqslant C_{\mathbb{V}}(|X|^p) < \infty. \tag{3.2}$$
Then for any $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} n^{\alpha p - 2}\, \mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > \varepsilon n^{\alpha}\right\} < \infty. \tag{3.3}$$
Theorem 3.2. Suppose that the conditions of Theorem 3.1 hold and $0 < r < p$. Then for any $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2}\, C_{\mathbb{V}}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| - \varepsilon n^{\alpha}\right\}_+^r < \infty. \tag{3.4}$$
Theorem 3.3. Suppose that the conditions of Theorem 3.1 hold and $\alpha p = 2$. Then
$$n^{-2/p} \sum_{i=1}^{n} a_{ni} X_{ni} \to 0 \quad \text{a.s. } \mathbb{V}, \ n \to \infty. \tag{3.5}$$
Remark 3.1. Theorems 3.1 and 3.3 extend the corresponding results of Yu et al. [14] from the classical probability space to the sub-linear expectation space.
Remark 3.2. Under sub-linear expectations, the main purpose of this paper is to improve the results of Zhong and Wu [21] from END random variables to arrays of rowwise $m$-END random variables, and to extend the range of $p$.
Remark 3.3. According to Definition 2.6, if $m = 1$ then the concept of $m$-END random variables reduces to that of END random variables under sub-linear expectations. Hence $m$-END random variables are a natural extension of END random variables, and they include END and ND random variables as special cases. Therefore Theorems 3.1-3.3 also hold for arrays of END and ND random variables under sub-linear expectations.
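In the classical special case ($m = 1$, ordinary probability, $a_{ni} \equiv 1$), Theorem 3.3 reduces to a Marcinkiewicz-Zygmund type strong law with normalization $n^{-2/p}$. A quick Monte Carlo sketch of this special case (all parameter choices below are ours, for illustration only):

```python
import numpy as np

# Monte Carlo sketch of the classical special case of Theorem 3.3:
# alpha * p = 2, a_{ni} = 1, i.i.d. centered X_i with E|X|^p < infinity,
# so n^(-2/p) * sum_{i<=n} X_i should tend to 0 almost surely.
rng = np.random.default_rng(42)
p = 2.0                                    # alpha = 2/p = 1: usual SLLN scaling
X = rng.standard_t(df=5, size=200_000)     # centered, finite second moment
n = np.arange(1, X.size + 1)
scaled = np.cumsum(X) / n ** (2.0 / p)
print(abs(scaled[-1]))                     # small for large n
```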
Proof of Theorem 3.1. Since
$$\sum_{i=1}^{n} a_{ni} X_{ni} = \sum_{i=1}^{n} a_{ni}^+ X_{ni} - \sum_{i=1}^{n} a_{ni}^- X_{ni},$$
for any $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > \varepsilon n^{\alpha}\right\} \leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni}^+ X_{ni}\right| > \frac{\varepsilon n^{\alpha}}{2}\right\} + \sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni}^- X_{ni}\right| > \frac{\varepsilon n^{\alpha}}{2}\right\}. \tag{4.1}$$
Without loss of generality, we may therefore assume $a_{ni} \geqslant 0$ for all $n \geqslant 1$ and $1 \leqslant i \leqslant n$, and it suffices to show that
$$\sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\sum_{i=1}^{n} a_{ni} X_{ni} > \varepsilon n^{\alpha}\right\} < \infty, \quad \forall \varepsilon > 0. \tag{4.2}$$
Indeed, since $\{-X_{ni}, n \geqslant 1, i \geqslant 1\}$ still satisfies the conditions of Theorem 3.1, (4.2) also yields
$$\sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\sum_{i=1}^{n} a_{ni} X_{ni} < -\varepsilon n^{\alpha}\right\} < \infty, \quad \forall \varepsilon > 0, \tag{4.3}$$
and (3.3) follows from (4.2) and (4.3).
In the following, we prove (4.2). For all $n \geqslant 1$ and $1 \leqslant i \leqslant n$, denote
$$\begin{aligned}
X'_{ni} &= -n^{\alpha} I(X_{ni} < -n^{\alpha}) + X_{ni} I(|X_{ni}| \leqslant n^{\alpha}) + n^{\alpha} I(X_{ni} > n^{\alpha}),\\
X''_{ni} &= X_{ni} - X'_{ni} = (X_{ni} + n^{\alpha}) I(X_{ni} < -n^{\alpha}) + (X_{ni} - n^{\alpha}) I(X_{ni} > n^{\alpha}).
\end{aligned} \tag{4.4}$$
By Definition 2.6, $\{X'_{ni}, n \geqslant 1, 1 \leqslant i \leqslant n\}$ and $\{a_{ni} X'_{ni}, n \geqslant 1, 1 \leqslant i \leqslant n\}$ are still arrays of rowwise $m$-END random variables. For any $0 < \beta \leqslant q$, by the Hölder inequality and (3.1), we obtain
$$\sum_{i=1}^{n} a_{ni}^{\beta} \leqslant \left(\sum_{i=1}^{n} a_{ni}^q\right)^{\beta/q} \left(\sum_{i=1}^{n} 1\right)^{1 - \beta/q} \leqslant Cn. \tag{4.5}$$
For any $\varepsilon > 0$,
$$\left\{\sum_{i=1}^{n} a_{ni} X_{ni} > \varepsilon n^{\alpha}\right\} \subset \left\{\bigcup_{i=1}^{n}(|X_{ni}| > n^{\alpha})\right\} \bigcup \left\{\sum_{i=1}^{n} a_{ni} X'_{ni} > \varepsilon n^{\alpha}\right\},$$
so it is easy to see that
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left(\sum_{i=1}^{n} a_{ni} X_{ni} > \varepsilon n^{\alpha}\right)
&\leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\bigcup_{i=1}^{n}(|X_{ni}| > n^{\alpha}) \bigcup \left(\sum_{i=1}^{n} a_{ni} X'_{ni} > \varepsilon n^{\alpha}\right)\right\}\\
&\leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{i=1}^{n} \mathbb{V}(|X_{ni}| > n^{\alpha}) + \sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left(\sum_{i=1}^{n} a_{ni} X'_{ni} > \varepsilon n^{\alpha}\right)\\
&\doteq H_1 + H_2.
\end{aligned}$$
Hence, we need to prove $H_1 < \infty$ and $H_2 < \infty$.
For $0 < \mu < 1$, let $g(x) \in C_{l,\mathrm{Lip}}(\mathbb{R})$ be decreasing for $x \geqslant 0$, with $0 \leqslant g(x) \leqslant 1$ for all $x \in \mathbb{R}$, $g(x) = 1$ if $|x| \leqslant \mu$, and $g(x) = 0$ if $|x| > 1$. Then
$$I(|x| \leqslant \mu) \leqslant g(|x|) \leqslant I(|x| \leqslant 1), \quad I(|x| > 1) \leqslant 1 - g(|x|) \leqslant I(|x| > \mu). \tag{4.6}$$
By (4.6) and (2.1) in Lemma 2.2,
$$H_1 \leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{i=1}^{n} \hat{\mathbb{E}}\left(1 - g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right) = \sum_{n=1}^{\infty} n^{\alpha p - 1}\, \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{n^{\alpha}}\right)\right) \leqslant \sum_{n=1}^{\infty} n^{\alpha p - 1}\, \mathbb{V}(|X| > \mu n^{\alpha}) < \infty.$$
Next we estimate $H_2$. For any $q > 0$, by the $c_r$ inequality, (4.4) and (4.6),
$$|X'_{ni}|^q \leqslant |X_{ni}|^q I(|X_{ni}| \leqslant n^{\alpha}) + n^{\alpha q} I(|X_{ni}| > n^{\alpha}) \leqslant |X_{ni}|^q g\left(\frac{\mu |X_{ni}|}{n^{\alpha}}\right) + n^{\alpha q}\left(1 - g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right),$$
and furthermore,
$$\hat{\mathbb{E}}|X'_{ni}|^q \leqslant \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + n^{\alpha q}\, \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{n^{\alpha}}\right)\right) \leqslant \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + n^{\alpha q}\, \mathbb{V}(|X| > \mu n^{\alpha}). \tag{4.7}$$
Case A1: $0 < p < 1$. By (4.5), (4.7), the Markov inequality and $\alpha p > 1$, we get
$$\begin{aligned}
n^{-\alpha}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}} X'_{ni}\right|
&\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}|X'_{ni}|
\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}\left(|X_{ni}| g\left(\frac{\mu |X_{ni}|}{n^{\alpha}}\right)\right) + \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}\left(1 - g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\\
&\leqslant C n^{1-\alpha}\, \hat{\mathbb{E}}|X| I\left(|X| \leqslant \frac{n^{\alpha}}{\mu}\right) + Cn\, \mathbb{V}(|X| > \mu n^{\alpha})
\leqslant C n^{1 - \alpha p}\, \hat{\mathbb{E}}|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$
Case A2: $p \geqslant 1$. By (4.5), $\hat{\mathbb{E}}X_{ni} = 0$ and $\alpha p > 1$, one gets
$$\begin{aligned}
n^{-\alpha}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}} X'_{ni}\right|
&\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}|X_{ni} - X'_{ni}|
= n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}|X''_{ni}|
\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}\left[(|X_{ni}| - n^{\alpha}) I(|X_{ni}| > n^{\alpha})\right]\\
&\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}\left[|X_{ni}|\left(1 - g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\right]
\leqslant C n^{1-\alpha}\, \hat{\mathbb{E}}\left[|X|\left(1 - g\left(\frac{|X|}{n^{\alpha}}\right)\right)\right]
\leqslant C n^{1-\alpha p}\, \hat{\mathbb{E}}|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$
It follows that for all $n$ large enough,
$$n^{-\alpha}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}} X'_{ni}\right| < \frac{\varepsilon}{2},$$
which implies
$$H_2 \leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2}\,\mathbb{V}\left\{\sum_{i=1}^{n} a_{ni}(X'_{ni} - \hat{\mathbb{E}} X'_{ni}) > \frac{\varepsilon n^{\alpha}}{2}\right\} \doteq H_3.
$$
By Definition 2.6, $\{a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni}), n \geqslant 1, 1 \leqslant i \leqslant n\}$ is still an array of rowwise $m$-END random variables, and $\hat{\mathbb{E}}(a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni})) = 0$. In order to prove $H_2 < \infty$, it remains to show $H_3 < \infty$.
Case B1: $p < 2$. By the $c_r$ inequality, the Jensen inequality and (2.5) in Lemma 2.4, combined with (4.5) and (4.7), we get
$$\begin{aligned}
H_3 &\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2}\, 4(1 + Ke)m^2 \frac{\sum_{i=1}^{n} \hat{\mathbb{E}}(a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni}))^2}{(\varepsilon n^{\alpha})^2}
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - 2\alpha} \sum_{i=1}^{n} \hat{\mathbb{E}}(a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni}))^2\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - 2\alpha} \sum_{i=1}^{n} a_{ni}^2\, \hat{\mathbb{E}}(X'_{ni})^2
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 1 - 2\alpha}\left[\hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + n^{2\alpha}\, \mathbb{V}(|X| > \mu n^{\alpha})\right]\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 1 - 2\alpha}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, \mathbb{V}(|X| > \mu n^{\alpha})
\doteq H_{31} + H_{32}.
\end{aligned}$$
By (2.1), $H_{32} < \infty$. Next we prove $H_{31} < \infty$.
For $0 < \mu < 1$, let $g_k(x) \in C_{l,\mathrm{Lip}}(\mathbb{R})$, $k \geqslant 1$, satisfy $0 \leqslant g_k(x) \leqslant 1$ for all $x \in \mathbb{R}$, $g_k\left(\frac{x}{2^{k\alpha}}\right) = 1$ if $2^{(k-1)\alpha} < |x| \leqslant 2^{k\alpha}$, and $g_k\left(\frac{x}{2^{k\alpha}}\right) = 0$ if $|x| \leqslant \mu 2^{(k-1)\alpha}$ or $|x| > (1 + \mu)2^{k\alpha}$. Then
$$g_k\left(\frac{|X|}{2^{k\alpha}}\right) \leqslant I\left(\mu 2^{(k-1)\alpha} < |X| \leqslant (1 + \mu)2^{k\alpha}\right), \quad |X|^l g\left(\frac{|X|}{2^{j\alpha}}\right) \leqslant 1 + \sum_{k=1}^{j} |X|^l g_k\left(\frac{|X|}{2^{k\alpha}}\right), \quad \forall l > 0. \tag{4.8}$$
By (4.8) and the fact that $g(x)$ is decreasing for $x \geqslant 0$,
$$\begin{aligned}
H_{31} &\leqslant C \sum_{j=1}^{\infty} \sum_{n=2^j}^{2^{j+1}-1} n^{\alpha p - 2\alpha - 1}\, \hat{\mathbb{E}}\left(X^2 g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right)
\leqslant C \sum_{j=1}^{\infty} 2^{(\alpha p - 2\alpha - 1)j}\, 2^j\, \hat{\mathbb{E}}\left(X^2 g\left(\frac{\mu |X|}{2^{\alpha(j+1)}}\right)\right)\\
&\leqslant C \sum_{j=1}^{\infty} 2^{\alpha(p-2)j}\, \hat{\mathbb{E}}\left(1 + \sum_{k=1}^{j} X^2 g_k\left(\frac{\mu |X|}{2^{(k+1)\alpha}}\right)\right)
\leqslant C \sum_{j=1}^{\infty} 2^{\alpha(p-2)j} + C \sum_{j=1}^{\infty} 2^{\alpha(p-2)j} \sum_{k=1}^{j} \hat{\mathbb{E}}\left(X^2 g_k\left(\frac{\mu |X|}{2^{\alpha(k+1)}}\right)\right)\\
&\doteq H_{311} + H_{312}.
\end{aligned} \tag{4.9}$$
Since $p < 2$, we obtain $H_{311} < \infty$. For $H_{312}$, by (4.8) and (2.2) in Lemma 2.2, we get
$$H_{312} \leqslant C \sum_{k=1}^{\infty} \sum_{j=k}^{\infty} 2^{\alpha(p-2)j}\, \hat{\mathbb{E}}\left(X^2 g_k\left(\frac{\mu |X|}{2^{\alpha(k+1)}}\right)\right) \leqslant C \sum_{k=1}^{\infty} 2^{\alpha p k}\, \mathbb{V}(|X| > 2^{\alpha k}) < \infty. \tag{4.10}$$
Case B2: $p \geqslant 2$. By $q > p \geqslant 2$ and $n \geqslant m$, taking $\delta = 1$ in (2.6) of Lemma 2.4, we have
$$\begin{aligned}
H_3 &\leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2}\, C_p \delta^{-2p} m^p K \frac{\sum_{i=1}^{n} \hat{\mathbb{E}}|a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni})|^q}{(\varepsilon n^{\alpha})^q} + \sum_{n=1}^{\infty} n^{\alpha p - 2}\, mK \exp\left\{-\frac{(\varepsilon n^{\alpha})^2}{8m^2 \sum_{i=1}^{n} \hat{\mathbb{E}}(a_{ni}(X'_{ni} - \hat{\mathbb{E}}X'_{ni}))^2 (1+\delta)}\right\}\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - \alpha q} \sum_{i=1}^{n} a_{ni}^q\, \hat{\mathbb{E}}|X'_{ni} - \hat{\mathbb{E}}X'_{ni}|^q + C \sum_{n=1}^{\infty} n^{\alpha p - 2} \exp\left\{-\frac{(\varepsilon n^{\alpha})^2}{16n^2 \sum_{i=1}^{n} a_{ni}^2\, \hat{\mathbb{E}}(X'_{ni} - \hat{\mathbb{E}}X'_{ni})^2}\right\}\\
&\doteq I_1 + I_2.
\end{aligned}$$
Next we establish $I_1 < \infty$ and $I_2 < \infty$. For $I_1$, by $\hat{\mathbb{E}}|X|^p < \infty$, the $c_r$ inequality, the Jensen inequality and (4.7), we have
$$\begin{aligned}
I_1 &\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - \alpha q} \sum_{i=1}^{n} a_{ni}^q\, \hat{\mathbb{E}}|X'_{ni}|^q
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - \alpha q} \sum_{i=1}^{n} a_{ni}^q\left(\hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + n^{\alpha q}\, \mathbb{V}(|X| > \mu n^{\alpha})\right)\\
&\leqslant C \sum_{i=1}^{\infty} \sum_{2^{i-1} \leqslant n < 2^i} n^{\alpha p - \alpha q - 1}\, \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{n^{\alpha}}\right)\right) + C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, \mathbb{V}(|X| > \mu n^{\alpha})
\doteq I_{11} + I_{12}.
\end{aligned} \tag{4.11}$$
By (2.1), it is obvious that $I_{12} < \infty$. We only need to prove $I_{11} < \infty$. By (2.2) and (4.8), it is easy to see that
$$\begin{aligned}
I_{11} &\leqslant C \sum_{i=1}^{\infty} 2^{i(\alpha p - \alpha q)}\, \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{2^{i\alpha}}\right)\right)
\leqslant C \sum_{i=1}^{\infty} 2^{i(\alpha p - \alpha q)} + C \sum_{i=1}^{\infty} 2^{i(\alpha p - \alpha q)} \sum_{k=1}^{i} \hat{\mathbb{E}}\left(|X|^q g_k\left(\frac{\mu |X|}{2^{k\alpha}}\right)\right)\\
&\leqslant C + C \sum_{k=1}^{\infty} \sum_{i=k}^{\infty} 2^{i(\alpha p - \alpha q)}\, \hat{\mathbb{E}}\left(|X|^q g_k\left(\frac{\mu |X|}{2^{k\alpha}}\right)\right)
\leqslant C + C \sum_{k=1}^{\infty} 2^{k\alpha p}\, \mathbb{V}(|X| > c 2^{k\alpha}) < \infty.
\end{aligned} \tag{4.12}$$
For $\alpha > 3/2$ we have $2\alpha - 3 > 0$, which implies that for all $n$ large enough,
$$\frac{\varepsilon^2}{16} n^{2\alpha - 3} \geqslant \alpha p \ln n.$$
By (3.2), we conclude that
$$I_2 \leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2} \exp\left\{-\frac{\varepsilon^2}{16} n^{2\alpha - 3}\right\} \leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2} \exp\{\ln n^{-\alpha p}\} \leqslant C \sum_{n=1}^{\infty} n^{-2} < \infty.$$
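The "for all $n$ large enough" in the estimate of $I_2$ matters: since $2\alpha - 3 > 0$, the power $n^{2\alpha-3}$ eventually dominates $\alpha p \ln n$, but when $\alpha$ is close to $3/2$ the crossover can be very late. A quick numerical illustration (the sample values of $\varepsilon$, $\alpha$, $p$ are our own, chosen with $\alpha > 3/2$ and $\alpha p = 2$):

```python
import math

# Illustration of (eps^2/16) * n^(2*alpha-3) >= alpha*p*ln(n) for large n.
eps, alpha, p = 0.5, 1.6, 1.25          # sample values with alpha > 3/2
lhs = lambda n: (eps ** 2 / 16) * n ** (2 * alpha - 3)
rhs = lambda n: alpha * p * math.log(n)

print(lhs(1e6) >= rhs(1e6))    # False: n = 10^6 is not yet "large enough"
print(lhs(1e30) >= rhs(1e30))  # True: the power eventually wins
```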
Hence $H_2 < \infty$. This finishes the proof of Theorem 3.1.
Proof of Theorem 3.2. Without loss of generality, assume $a_{ni} \geqslant 0$ for all $n \geqslant 1$ and $1 \leqslant i \leqslant n$. For any $\varepsilon > 0$, we have
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2}\, C_{\mathbb{V}}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| - \varepsilon n^{\alpha}\right\}_+^r
&= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_0^{\infty} \mathbb{V}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| - \varepsilon n^{\alpha} > x^{1/r}\right) dx\\
&= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_0^{n^{\alpha r}} \mathbb{V}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| - \varepsilon n^{\alpha} > x^{1/r}\right) dx + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| - \varepsilon n^{\alpha} > x^{1/r}\right) dx\\
&\leqslant \sum_{n=1}^{\infty} n^{\alpha p - 2}\, \mathbb{V}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > \varepsilon n^{\alpha}\right) + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > x^{1/r}\right) dx.
\end{aligned}$$
The first series is finite by Theorem 3.1; for the second, the contribution of $\{-X_{ni}\}$ is handled at the end of the proof, so it suffices to show that
$$J \doteq \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum_{i=1}^{n} a_{ni} X_{ni} > x^{1/r}\right) dx < \infty.$$
For all $n \geqslant 1$ and $1 \leqslant i \leqslant n$, denote
$$\begin{aligned}
Y'_{ni} &= -x^{1/r} I(X_{ni} < -x^{1/r}) + X_{ni} I(|X_{ni}| \leqslant x^{1/r}) + x^{1/r} I(X_{ni} > x^{1/r}),\\
Y''_{ni} &= (X_{ni} + x^{1/r}) I(X_{ni} < -x^{1/r}) + (X_{ni} - x^{1/r}) I(X_{ni} > x^{1/r});
\end{aligned}$$
then
$$\begin{aligned}
J &\leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \sum_{i=1}^{n} \mathbb{V}(|X_{ni}| > x^{1/r})\, dx + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum_{i=1}^{n} a_{ni} Y'_{ni} > x^{1/r}\right) dx\\
&\leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \sum_{i=1}^{n} \mathbb{V}(|X_{ni}| > x^{1/r})\, dx + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum_{i=1}^{n} a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni}) > x^{1/r} - \left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}Y'_{ni}\right|\right) dx\\
&\doteq J_1 + J_2.
\end{aligned}$$
In order to show $J < \infty$, we only need to show that $J_1 < \infty$ and $J_2 < \infty$. By (4.5), (2.1) in Lemma 2.2 and the fact that $g(x)$ is decreasing for $x \geqslant 0$, we get
$$\begin{aligned}
J_1 &\leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \sum_{i=1}^{n} \hat{\mathbb{E}}\left(1 - g\left(\frac{|X_{ni}|}{x^{1/r}}\right)\right) dx
= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{x^{1/r}}\right)\right) dx\\
&= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{m=n}^{\infty} \int_{m^{\alpha r}}^{(m+1)^{\alpha r}} \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{x^{1/r}}\right)\right) dx
\leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{m=n}^{\infty} \left[(m+1)^{\alpha r} - m^{\alpha r}\right] \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{m^{\alpha}}\right)\right)\\
&\leqslant C \sum_{m=1}^{\infty} m^{\alpha r - 1}\, \mathbb{V}(|X| > \mu m^{\alpha}) \sum_{n=1}^{m} n^{\alpha p - \alpha r - 1}
\leqslant C \sum_{m=1}^{\infty} m^{\alpha p - 1}\, \mathbb{V}(|X| > \mu m^{\alpha}) < \infty.
\end{aligned}$$
Next we prove $J_2 < \infty$. By (4.6) and the $c_r$ inequality, for all $\gamma > 0$,
$$\hat{\mathbb{E}}|Y'_{ni}|^{\gamma} \leqslant \hat{\mathbb{E}}\left(|X|^{\gamma} g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) + x^{\gamma/r}\, \hat{\mathbb{E}}\left(1 - g\left(\frac{|X|}{x^{1/r}}\right)\right) \leqslant \hat{\mathbb{E}}\left(|X|^{\gamma} g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) + x^{\gamma/r}\, \mathbb{V}(|X| > \mu x^{1/r}). \tag{4.13}$$
Case C1: $p \geqslant 1$. By (4.5), $\hat{\mathbb{E}}X_{ni} = 0$ and $\alpha p > 1$, it is easy to see that
$$\begin{aligned}
\sup_{x \geqslant n^{\alpha r}} x^{-1/r}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}Y'_{ni}\right|
&\leqslant \sup_{x \geqslant n^{\alpha r}} x^{-1/r} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}|X_{ni} - Y'_{ni}|
= \sup_{x \geqslant n^{\alpha r}} x^{-1/r} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}|Y''_{ni}|\\
&\leqslant \sup_{x \geqslant n^{\alpha r}} x^{-1/r} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}\left[(|X_{ni}| - x^{1/r}) I(|X_{ni}| > x^{1/r})\right]
\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}\left[|X_{ni}| I(|X_{ni}| > n^{\alpha})\right]\\
&\leqslant n^{-\alpha} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}\left[|X_{ni}|\left(1 - g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\right]
\leqslant C n^{1-\alpha}\, \hat{\mathbb{E}}\left[|X|\left(1 - g\left(\frac{|X|}{n^{\alpha}}\right)\right)\right]
\leqslant C n^{1-\alpha p}\, \hat{\mathbb{E}}|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$
Case C2: $0 < p < 1$. By (4.5), (4.13), the Markov inequality and $\alpha p > 1$, we get
$$\begin{aligned}
\sup_{x \geqslant n^{\alpha r}} x^{-1/r}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}Y'_{ni}\right|
&\leqslant \sup_{x \geqslant n^{\alpha r}} x^{-1/r} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}|Y'_{ni}|
\leqslant \sup_{x \geqslant n^{\alpha r}} x^{-1/r} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}\left(|X_{ni}| g\left(\frac{\mu |X_{ni}|}{x^{1/r}}\right)\right) + \sup_{x \geqslant n^{\alpha r}} \sum_{i=1}^{n} a_{ni}\, \hat{\mathbb{E}}\left(1 - g\left(\frac{|X_{ni}|}{x^{1/r}}\right)\right)\\
&\leqslant \sup_{x \geqslant n^{\alpha r}} x^{-1/r}\, Cn\, \hat{\mathbb{E}}|X| I\left(|X| \leqslant \frac{x^{1/r}}{\mu}\right) + \sup_{x \geqslant n^{\alpha r}} Cn\, \mathbb{V}(|X| > \mu x^{1/r})\\
&\leqslant C n^{1-\alpha p}\, \hat{\mathbb{E}}|X|^p + Cn\, \mathbb{V}(|X| > \mu n^{\alpha})
\leqslant C n^{1-\alpha p}\, \hat{\mathbb{E}}|X|^p \to 0, \quad n \to \infty.
\end{aligned}$$
Hence, it follows that for all $n$ large enough,
$$\sup_{x \geqslant n^{\alpha r}} x^{-1/r}\left|\sum_{i=1}^{n} a_{ni} \hat{\mathbb{E}}Y'_{ni}\right| < \frac{1}{2},$$
which implies
$$J_2 \leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum_{i=1}^{n} a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni}) > \frac{x^{1/r}}{2}\right) dx \doteq J_3.$$
By Definition 2.6, $\{a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni}), n \geqslant 1, 1 \leqslant i \leqslant n\}$ is still an array of rowwise $m$-END random variables, and $\hat{\mathbb{E}}(a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni})) = 0$. In order to prove $J_2 < \infty$, we have to show $J_3 < \infty$.
Case D1: $p < 2$. By the $c_r$ inequality, the Jensen inequality and (2.5) in Lemma 2.4, combined with (4.5) and (4.13),
$$\begin{aligned}
J_3 &\leqslant \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2}\, 4(1 + Ke)m^2 \int_{n^{\alpha r}}^{\infty} \sum_{i=1}^{n} \hat{\mathbb{E}}(a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni}))^2\, x^{-2/r}\, dx
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} x^{-2/r} \sum_{i=1}^{n} a_{ni}^2\, \hat{\mathbb{E}}(Y'_{ni})^2\, dx\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} x^{-2/r}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) dx + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} \mathbb{V}(|X| > \mu x^{1/r})\, dx,
\end{aligned}$$
where the second series is finite by the same computation as for $J_1$. For the first,
$$\begin{aligned}
&C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{k=n}^{\infty} \int_{k^{\alpha r}}^{(k+1)^{\alpha r}} x^{-2/r}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) dx
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{k=n}^{\infty} k^{\alpha r - 1 - 2\alpha}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{k^{\alpha}}\right)\right)\\
&\leqslant C \sum_{k=1}^{\infty} k^{\alpha r - 1 - 2\alpha}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{k^{\alpha}}\right)\right) \sum_{n=1}^{k} n^{\alpha p - \alpha r - 1}
\leqslant C \sum_{k=1}^{\infty} k^{\alpha p - 2\alpha - 1}\, \hat{\mathbb{E}}\left(|X|^2 g\left(\frac{\mu |X|}{k^{\alpha}}\right)\right) < \infty,
\end{aligned}$$
where the last series is finite by the estimates (4.9) and (4.10) of $H_{31}$.
Case D2: $p \geqslant 2$. For $q > p \geqslant 2$ and $n \geqslant m$, by (2.6) in Lemma 2.4 with $\delta = 1$, the $c_r$ inequality and the Jensen inequality, we have
$$\begin{aligned}
J_3 &\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \frac{\sum_{i=1}^{n} \hat{\mathbb{E}}|a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni})|^q}{x^{q/r}}\, dx + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \exp\left\{-\frac{x^{2/r}}{8m^2 \sum_{i=1}^{n} \hat{\mathbb{E}}(a_{ni}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni}))^2 (1+\delta)}\right\} dx\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} x^{-q/r} \sum_{i=1}^{n} a_{ni}^q\, \hat{\mathbb{E}}|Y'_{ni}|^q\, dx + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \exp\left\{-\frac{x^{2/r}}{16m^2 \sum_{i=1}^{n} a_{ni}^2\, \hat{\mathbb{E}}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni})^2}\right\} dx\\
&\doteq J_{31} + J_{32}.
\end{aligned}$$
Next we prove $J_{31} < \infty$ and $J_{32} < \infty$. By the $c_r$ inequality, the Jensen inequality, (4.5), (4.11), (4.12) and (4.13),
$$\begin{aligned}
J_{31} &\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} x^{-q/r}\left(\hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) + x^{q/r}\, \mathbb{V}(|X| > \mu x^{1/r})\right) dx\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{k=n}^{\infty} \int_{k^{\alpha r}}^{(k+1)^{\alpha r}} x^{-q/r}\, \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{x^{1/r}}\right)\right) dx + C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} \mathbb{V}(|X| > \mu x^{1/r})\, dx\\
&\leqslant C \sum_{k=1}^{\infty} k^{\alpha r - 1 - \alpha q}\, \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{k^{\alpha}}\right)\right) \sum_{n=1}^{k} n^{\alpha p - \alpha r - 1} + C
\leqslant C \sum_{k=1}^{\infty} k^{\alpha p - 1 - \alpha q}\, \hat{\mathbb{E}}\left(|X|^q g\left(\frac{\mu |X|}{k^{\alpha}}\right)\right) + C < \infty,
\end{aligned}$$
where the $\mathbb{V}$-series is finite as in the estimate of $J_1$, and the last series is finite by the same dyadic argument as in (4.11) and (4.12).
Let $\beta > \max\left\{\frac{\alpha p - 1}{2\alpha - 3}, \frac{r}{2}\right\}$, so that $2\beta/r > 1$ and $2 - \alpha p + (2\alpha - 3)\beta > 1$. Since $e^s > s^{\beta}$ for all $s$ large enough, substituting $x = n^{\alpha r} t$ and noting (3.2), we obtain
$$\begin{aligned}
J_{32} &\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \exp\left\{-\frac{x^{2/r}}{16n^3\, \hat{\mathbb{E}}(Y'_{ni} - \hat{\mathbb{E}}Y'_{ni})^2}\right\} dx
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \exp\left\{-\frac{x^{2/r}}{Cn^3}\right\} dx\\
&= C \sum_{n=1}^{\infty} n^{\alpha p - 2} \int_1^{\infty} \exp\left\{-\frac{n^{2\alpha} t^{2/r}}{Cn^3}\right\} dt
\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2} \int_1^{\infty} \left(\frac{n^{2\alpha} t^{2/r}}{Cn^3}\right)^{-\beta} dt\\
&\leqslant C \sum_{n=1}^{\infty} n^{\alpha p - 2 - (2\alpha - 3)\beta} \int_1^{\infty} \frac{1}{t^{2\beta/r}}\, dt
\leqslant C \sum_{n=1}^{\infty} \frac{1}{n^{2 - \alpha p + (2\alpha - 3)\beta}} < \infty.
\end{aligned}$$
Since $\hat{\mathbb{E}}(X_{ni}) = \hat{\varepsilon}(X_{ni}) = 0$, the array $\{-X_{ni}, n \geqslant 1, i \geqslant 1\}$ also satisfies the conditions of Theorem 3.2, and we likewise obtain
$$\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum_{i=1}^{n} a_{ni} X_{ni} < -x^{1/r}\right) dx < \infty.$$
Hence, the proof of Theorem 3.2 is finished.
Proof of Theorem 3.3. Taking $\alpha p = 2$ in Theorem 3.1, we get
$$\sum_{n=1}^{\infty} \mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > \varepsilon n^{\alpha}\right\} < \infty.$$
By Lemma 2.5,
$$\mathbb{V}\left\{\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| > \varepsilon n^{\alpha}, \text{ i.o.}\right\} = 0,$$
and hence
$$\mathcal{V}\left\{\bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| \leqslant \varepsilon n^{\alpha}\right)\right\} = 1.$$
Then
$$\bigcup_{m=1}^{\infty} \bigcap_{n=m}^{\infty}\left(\left|\sum_{i=1}^{n} a_{ni} X_{ni}\right| \leqslant \varepsilon n^{\alpha}\right) \subset \left(n^{-\alpha} \sum_{i=1}^{n} a_{ni} X_{ni} \longrightarrow 0\right).$$
When $\alpha = 2/p$, we have
$$\mathcal{V}\left\{n^{-2/p} \sum_{i=1}^{n} a_{ni} X_{ni} \longrightarrow 0\right\} = 1.$$
This completes the proof of Theorem 3.3.
This paper was supported by the Department of Science and Technology of Jilin Province (Grant No. YDZJ202101ZYTS156), and Graduate Innovation Project of Beihua University (2021003).
All authors declare no conflict of interest in this paper.
[1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô's type, Stoch. Anal. Appl., 2 (2006), 541-567. doi: 10.1007/978-3-540-70847-6_25
[2] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223-2253. doi: 10.1016/j.spa.2007.10.015
[3] S. G. Peng, A new central limit theorem under sublinear expectations, arXiv:0803.2656, 2008.
[4] P. Y. Chen, S. X. Gan, Limiting behavior of weighted sums of i.i.d. random variables, Statist. Probab. Lett., 77 (2007), 1589-1599. doi: 10.1016/j.spl.2007.03.038
[5] Z. C. Hu, L. Zhou, Multi-dimensional central limit theorems and laws of large numbers under sublinear expectations, Acta Math. Sci. Ser. B (Engl. Ed.), 31 (2015), 305-318. doi: 10.1007/s10114-015-3212-1
[6] L. X. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci. Ser. B (Engl. Ed.), 42 (2022), 467-490. doi: 10.1007/s10473-022-0203-z
[7] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503-2526. doi: 10.1007/s11425-016-0079-1
[8] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751-768. doi: 10.1007/s11425-015-5105-2
[9] L. X. Zhang, J. H. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Statist. Probab. Lett., 137 (2018), 269-276. doi: 10.48550/arXiv.1703.00604
[10] Y. T. Lan, N. Zhang, Several moment inequalities under sublinear expectations, Acta Math. Appl. Sinica, 41 (2018), 229-248. doi: 10.12387/C2018018
[11] S. Guo, Y. Zhang, Moderate deviation principle for m-dependent random variables under the sub-linear expectation, AIMS Math., 7 (2022), 5943-5956. doi: 10.3934/math.2022331
[12] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, Proc. Natl. Acad. Sci. USA, 33 (1947), 25-31. doi: 10.1073/pnas.33.2.25
[13] Y. S. Chow, On the rate of moment complete convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica, 16 (1988), 177-201.
[14] Q. H. Yu, M. M. Ning, M. Pan, A. T. Shen, Complete convergence for weighted sums of arrays of rowwise m-END random variables, J. Hebei Norm. Univ. Nat. Sci. Ed., 40 (2018), 333-338. doi: 10.3969/j.issn.1000-2375.2018.04.003
[15] B. Meng, D. C. Wang, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Comm. Statist. Theory Methods, 51 (2022), 1-14. doi: 10.1080/03610926.2020.1804587
[16] Y. F. Wu, M. Ordóñez Cabrera, A. Volodin, Complete convergence and complete moment convergence for arrays of rowwise END random variables, Glas. Mat. Ser. III, 49 (2014), 449-468. doi: 10.3336/gm.49.2.16
[17] Y. F. Wu, M. Guan, Convergence properties of the partial sums for sequences of END random variables, J. Korean Math. Soc., 49 (2012), 1097-1110. doi: 10.4134/jkms.2012.49.6.1097
[18] X. J. Wang, X. Q. Li, S. H. Hu, X. H. Wang, On complete convergence for an extended negatively dependent sequence, Comm. Statist. Theory Methods, 43 (2014), 2923-2937. doi: 10.1080/03610926.2012.690489
[19] Y. Ding, Y. Wu, S. L. Ma, X. R. Tao, X. J. Wang, Complete convergence and complete moment convergence for widely orthant-dependent random variables, Comm. Statist. Theory Methods, 46 (2017), 8278-8294. doi: 10.1080/03610926.2016.1177085
[20] F. X. Feng, D. C. Wang, Q. Y. Wu, H. W. Huang, Complete and complete moment convergence for weighted sums of arrays of rowwise negatively dependent random variables under the sub-linear expectations, Comm. Statist. Theory Methods, 50 (2021), 594-608. doi: 10.1080/03610926.2019.1639747
[21] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 1-14. doi: 10.1186/s13660-017-1538-1
[22] C. C. Jia, Q. Y. Wu, Complete convergence and complete integral convergence for weighted sums of widely acceptable random variables under the sub-linear expectations, AIMS Math., 7 (2022), 8430-8448. doi: 10.3934/math.2022470
[23] D. W. Lu, Y. Meng, Complete and complete integral convergence for arrays of rowwise widely negative dependent random variables under the sub-linear expectations, Comm. Statist. Theory Methods, 51 (2022), 1-14. doi: 10.1080/03610926.2020.1786585