In this paper, we study the complete convergence and the complete integration convergence for weighted sums of $m$-extended negatively dependent ($m$-END) random variables under the sub-linear expectations space with the condition $\hat{\mathbb{E}}|X|^{p}\leqslant C_{\mathbb{V}}(|X|^{p}) < \infty$, $p > 1/\alpha$ and $\alpha > 3/2$. We obtain results that can be regarded as extensions of complete convergence and complete moment convergence under the classical probability space. In addition, the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of $m$-END random variables under the sub-linear expectations space is proved.
Citation: He Dong, Xili Tan, Yong Zhang. Complete convergence and complete integration convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space[J]. AIMS Mathematics, 2023, 8(3): 6705-6724. doi: 10.3934/math.2023340
In the era of information modernization, limit theorems are widely used in economics, information science, and risk measurement. The limit theory of the classical probability space assumes additive probability and additive expectation, which suits models without uncertainty. However, financial and economic problems involve varying degrees of uncertainty. In order to analyze and compute such problems under uncertainty, Peng [1,2] proposed the notion of sub-linear expectations and constructed the basic framework of the sub-linear expectations theory. Sub-linear expectations relax the additivity of probability and expectation required in the classical setting; hence, the theory of sub-linear expectations is more complex and challenging. Under sub-linear expectations, Peng [3] established the central limit theorem. Inspired by Peng's seminal articles, many researchers have explored results under sub-linear expectations. Chen and Gan [4] obtained the limiting behavior of weighted sums of independent and identically distributed sequences. Hu and Zhou [5] demonstrated several multi-dimensional central limit theorems and laws of large numbers. Zhang [6,7,8] obtained a series of important inequalities under sub-linear expectations. In addition, Zhang and Lin [9] studied Kolmogorov's strong law of large numbers. Lan and Zhang [10] proved several moment inequalities, including Bernstein's, Kolmogorov's and Rademacher's inequalities. Guo and Zhang [11] obtained a moderate deviation principle for $m$-dependent random variables under the sub-linear expectation.
In 1947, the notion of complete convergence was introduced by Hsu and Robbins [12] as follows. Let $\{X_n,n\geqslant1\}$ be a sequence of independent and identically distributed random variables in a probability space $(\Omega,\mathcal{F},P)$ with $EX_1 = 0$ and $EX_1^2 < \infty$, and set $S_n = \sum_{k = 1}^{n}X_k$; then

\begin{align*} \sum\limits_{n = 1}^{\infty}P(|S_n| > n\varepsilon) < \infty, \quad \text{for all } \varepsilon > 0. \end{align*}
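As a numerical illustration (not part of the original result), the summands $P(|S_n| > n\varepsilon)$ can be estimated by Monte Carlo; the distribution (centered uniform), the value of $\varepsilon$ and the trial count below are illustrative choices:

```python
import random

def tail_prob(n, eps, trials=2000, seed=0):
    """Monte Carlo estimate of P(|S_n| > n*eps), where S_n is the sum of
    n i.i.d. centered uniform(-1, 1) variables (EX = 0, EX^2 < infinity)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.uniform(-1.0, 1.0) for _ in range(n))
        if abs(s) > n * eps:
            hits += 1
    return hits / trials

# The terms P(|S_n| > n*eps) decay rapidly in n; this fast decay is
# what makes the Hsu-Robbins series summable.
probs = [tail_prob(n, eps=0.5) for n in (5, 20, 80)]
assert probs[-1] <= probs[0] and probs[-1] < 0.02
```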
In 1988, Chow [13] established complete moment convergence, which is stronger than complete convergence. In the classical probability space, the complete convergence and the complete moment convergence for various sequences have been studied extensively. For example, Yu et al. [14] proved the complete convergence for weighted sums of arrays of rowwise $m$-END random variables. Wu et al. [16,17] and Wang et al. [18] carried out a series of studies on extended negatively dependent (END) random variables. Meng et al. [15] and Ding et al. [19] demonstrated the complete convergence and the complete moment convergence for END random variables and widely orthant dependent (WOD) random variables, respectively. Based on the basic framework of sub-linear expectations, researchers have extended theories and properties of the classical probability space to sub-linear expectations. For instance, Feng et al. [20] studied the complete convergence and the complete moment convergence for weighted sums of arrays of rowwise negatively dependent (ND) random variables. Zhong and Wu [21], Jia and Wu [22], and Lu and Meng [23] obtained new results on complete convergence and complete integral convergence in their recent papers.
This paper aims to prove the complete convergence and the complete integral convergence for weighted sums of arrays of rowwise m-END under sub-linear expectations space. The rest of the paper is organized as follows. In section 2, we generally recall some basic notations and definitions, related properties under sub-linear expectations and preliminary lemmas that are useful to prove the main theorems. In section 3, the complete convergence, complete integral convergence and Marcinkiewicz-Zygmund type strong law of large numbers under sub-linear expectations space are established. In the last section, the proofs of these theorems are stated.
We use the framework and notions of Peng [1,2]. Let $(\Omega,\mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega,\mathcal{F})$ such that if $X_1,X_2,\cdots,X_n\in\mathcal{H}$ then $\varphi(X_1,\cdots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^{n})$, where $C_{l,Lip}(\mathbb{R}^{n})$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying

\begin{align*} |\varphi(x)-\varphi(y)|\leqslant c(1+|x|^{m}+|y|^{m})|x-y|, \quad \forall x,y\in\mathbb{R}^{n}, \end{align*}

for some $c > 0$, $m\in\mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables. In this case we denote $X\in\mathcal{H}$.
Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}:\mathcal{H}\rightarrow\bar{\mathbb{R}}$ satisfying the following properties: for all $X,Y\in\mathcal{H}$, we have

(a) Monotonicity: if $X\geqslant Y$ then $\hat{\mathbb{E}}[X]\geqslant\hat{\mathbb{E}}[Y]$;

(b) Constant preserving: $\hat{\mathbb{E}}[c] = c$;

(c) Sub-additivity: $\hat{\mathbb{E}}[X+Y]\leqslant\hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]$;

(d) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda\hat{\mathbb{E}}[X]$, $\lambda\geqslant0$.

Here $\bar{\mathbb{R}} = [-\infty,\infty]$. The triple $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sub-linear expectation space. Given a sub-linear expectation $\hat{\mathbb{E}}$, let us denote the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ by

\begin{align*} \hat{\varepsilon}[X] = -\hat{\mathbb{E}}[-X], \quad \forall X\in\mathcal{H}. \end{align*}
From the definition, it is easily shown that for all $X,Y\in\mathcal{H}$,

\begin{align*} \hat{\varepsilon}[X]\leqslant\hat{\mathbb{E}}[X], \quad \hat{\mathbb{E}}[X+c] = \hat{\mathbb{E}}[X]+c, \quad |\hat{\mathbb{E}}[X-Y]|\leqslant\hat{\mathbb{E}}|X-Y|, \quad \hat{\mathbb{E}}[X-Y]\geqslant\hat{\mathbb{E}}[X]-\hat{\mathbb{E}}[Y]. \end{align*}
Definition 2.2. Let $\mathcal{G}\subset\mathcal{F}$. A function $V:\mathcal{G}\rightarrow[0,1]$ is called a capacity if

(1) $V(\emptyset) = 0$, $V(\Omega) = 1$;

(2) $V(A)\leqslant V(B)$, $\forall A\subset B$, $A,B\in\mathcal{G}$.

It is called sub-additive if $V(A\cup B)\leqslant V(A)+V(B)$ for all $A,B\in\mathcal{G}$ with $A\cup B\in\mathcal{G}$. Given the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, define the pair $(\mathbb{V},\mathcal{V})$ of capacities by

\begin{align*} \mathbb{V}(A) = \inf\{\hat{\mathbb{E}}[\xi]:I(A)\leqslant\xi, \xi\in\mathcal{H}\}, \quad \mathcal{V}(A) = 1-\mathbb{V}(A^{c}), \quad \forall A\in\mathcal{F}, \end{align*}

where $A^{c}$ is the complement set of $A$. It is obvious that $\mathbb{V}$ is sub-additive and

\begin{align*} \mathcal{V}(A)\leqslant\mathbb{V}(A), \quad \forall A\in\mathcal{F}, \end{align*}

\begin{align*} \mathbb{V}(A): = \hat{\mathbb{E}}[I_{A}], \quad \mathcal{V}(A): = \hat{\varepsilon}[I_{A}], \quad {\rm if} \ I_{A}\in\mathcal{H}, \end{align*}

\begin{align*} \hat{\mathbb{E}}[f]\leqslant\mathbb{V}(A)\leqslant\hat{\mathbb{E}}[g], \quad \hat{\varepsilon}[f]\leqslant\mathcal{V}(A)\leqslant\hat{\varepsilon}[g], \quad {\rm if} \ f\leqslant I_{A}\leqslant g, \ f,g\in\mathcal{H}. \end{align*}
For all $X\in\mathcal{H}$, $p > 0$ and $x > 0$,

\begin{align*} I(|X| > x)\leqslant\frac{|X|^{p}}{x^{p}}I(|X| > x)\leqslant\frac{|X|^{p}}{x^{p}}. \end{align*}
Definition 2.3. We define the Choquet integrals $(C_{\mathbb{V}},C_{\mathcal{V}})$ by

\begin{align*} C_{V}[X] = \int_{0}^{\infty}V(X\geqslant t)dt+\int_{-\infty}^{0}[V(X\geqslant t)-1]dt, \end{align*}

with $V$ being replaced by $\mathbb{V}$ and $\mathcal{V}$ respectively.
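To make the definition concrete, here is a small numerical sketch (an illustration, not from the paper): for a non-negative discrete random variable and the upper capacity generated by two candidate probability measures, $C_{\mathbb{V}}[X] = \int_{0}^{\infty}\mathbb{V}(X\geqslant t)dt$ can be evaluated by direct numerical integration. The outcome values and measures below are arbitrary toy data.

```python
# Illustrative outcome values of X and two candidate probability measures.
values = [0.0, 1.0, 3.0]
P1 = [0.5, 0.3, 0.2]
P2 = [0.2, 0.3, 0.5]

def upper_capacity(event):
    """V(A) = max_P P(A): monotone and sub-additive, V(empty)=0, V(all)=1."""
    return max(sum(P1[i] for i in event), sum(P2[i] for i in event))

def choquet(values, capacity, steps=30000, t_max=3.0):
    """Midpoint-rule evaluation of C_V[X] = int_0^inf V(X >= t) dt for X >= 0."""
    dt = t_max / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        total += capacity([i for i, v in enumerate(values) if v >= t]) * dt
    return total

cv = choquet(values, upper_capacity)
# Here V(X >= t) = 0.8 on (0, 1] and 0.5 on (1, 3], so C_V[X] = 1.8.
assert abs(cv - 1.8) < 0.01
```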
Definition 2.4. [3] (Identical distribution) Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined, respectively, in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed, denoted by $X_1\overset{d}{ = }X_2$, if

\begin{align*} \hat{\mathbb{E}}_1(\varphi(X_1)) = \hat{\mathbb{E}}_2(\varphi(X_2)), \quad \forall\varphi\in C_{l,Lip}(\mathbb{R}^{n}), \end{align*}

whenever the sub-linear expectations are finite. A sequence $\{X_n,n\geqslant1\}$ of random variables is said to be identically distributed if $X_i\overset{d}{ = }X_1$ for each $i\geqslant1$.
Definition 2.5. [7] (END) In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, random variables $\{X_n,n\geqslant1\}$ are said to be upper (resp. lower) extended negatively dependent if there is some dominating constant $K\geqslant1$ such that

\begin{align*} \hat{\mathbb{E}}\left(\prod\limits_{i = 1}^{n}\varphi_i(X_i)\right)\leqslant K\prod\limits_{i = 1}^{n}\hat{\mathbb{E}}(\varphi_i(X_i)), \quad n\geqslant1, \end{align*}

whenever the non-negative functions $\varphi_i\in C_{l,Lip}(\mathbb{R})$, $i = 1,2,\cdots$, are all non-decreasing (resp. all non-increasing). They are called END if they are both upper extended negatively dependent and lower extended negatively dependent.
Definition 2.6. ($m$-END) Let $m\geqslant1$ be a fixed positive integer. In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, random variables $\{X_n,n\geqslant1\}$ are said to be $m$-END if for any $n\geqslant2$ and any $i_1,i_2,\cdots,i_n$ such that $|i_k-i_j|\geqslant m$ for all $1\leqslant k\neq j\leqslant n$, the random variables $X_{i_1},X_{i_2},\cdots,X_{i_n}$ are END, i.e.,

\begin{align*} \hat{\mathbb{E}}\left(\prod\limits_{k = 1}^{n}\varphi_k(X_{i_k})\right)\leqslant K\prod\limits_{k = 1}^{n}\hat{\mathbb{E}}(\varphi_k(X_{i_k})), \quad n\geqslant1, \ |i_k-i_j|\geqslant m, \ 1\leqslant k\neq j\leqslant n, \end{align*}

where $K\geqslant1$ is some dominating constant and the non-negative functions $\varphi_i\in C_{l,Lip}(\mathbb{R})$, $i = 1,2,\cdots$, are all non-decreasing or all non-increasing. An array of random variables $\{X_{ni},n\geqslant1,i\geqslant1\}$ is called an array of rowwise $m$-END random variables if for every $n\geqslant1$, $\{X_{ni},i\geqslant1\}$ is a sequence of $m$-END random variables with a dominating sequence $\{K_n,n\geqslant1\}$.
It is clear that if $\{X_n,n\geqslant1\}$ is a sequence of $m$-END random variables and $f_1(x),f_2(x),\cdots\in C_{l,Lip}(\mathbb{R})$ are all non-decreasing (or all non-increasing), then $\{f_n(X_n),n\geqslant1\}$ is also a sequence of $m$-END random variables.
In the following, let $\{X_n,n\geqslant1\}$ be a sequence of random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$. The symbol $C$ denotes a generic positive constant which may differ from one place to another, and $I(\cdot)$ denotes an indicator function. The following five lemmas are needed in the proofs of our theorems.
Lemma 2.1. [20] (i) Markov inequality: for all $X\in\mathcal{H}$,

\begin{align*} \mathbb{V}(|X|\geqslant x)\leqslant\hat{\mathbb{E}}(|X|^{p})/x^{p}, \quad \forall x > 0, \ p > 0. \end{align*}

(ii) Hölder inequality: for all $X,Y\in\mathcal{H}$ and $p,q > 1$ satisfying $p^{-1}+q^{-1} = 1$,

\begin{align*} \hat{\mathbb{E}}(|XY|)\leqslant(\hat{\mathbb{E}}(|X|^{p}))^{1/p}(\hat{\mathbb{E}}(|Y|^{q}))^{1/q}. \end{align*}

(iii) Jensen inequality: for all $X\in\mathcal{H}$ and $0 < r < s$,

\begin{align*} (\hat{\mathbb{E}}(|X|^{r}))^{1/r}\leqslant(\hat{\mathbb{E}}(|X|^{s}))^{1/s}. \end{align*}
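When the sub-linear expectation is an ordinary (linear) expectation, Lemma 2.1 reduces to the classical Markov, Hölder and Jensen inequalities. The following sketch checks the three reductions empirically on a sample (all three hold exactly for any empirical measure); the sample size, distribution and exponents are illustrative choices:

```python
import random

# Empirical check of the linear-expectation special case of Lemma 2.1.
rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(10000)]
ys = [rng.gauss(0.0, 1.0) for _ in range(10000)]

def mean(zs):
    return sum(zs) / len(zs)

# (i) Markov: P(|X| >= x) <= E(|X|^p) / x^p
x, p = 1.5, 2.0
lhs = mean([1.0 if abs(v) >= x else 0.0 for v in xs])
assert lhs <= mean([abs(v) ** p for v in xs]) / x ** p

# (ii) Holder with p = q = 2 (Cauchy-Schwarz): E|XY| <= (E|X|^2)^(1/2)(E|Y|^2)^(1/2)
exy = mean([abs(a * b) for a, b in zip(xs, ys)])
assert exy <= mean([a * a for a in xs]) ** 0.5 * mean([b * b for b in ys]) ** 0.5

# (iii) Jensen: (E|X|^r)^(1/r) <= (E|X|^s)^(1/s) for 0 < r < s
r, s = 1.0, 2.0
assert mean([abs(v) ** r for v in xs]) ** (1 / r) <= mean([abs(v) ** s for v in xs]) ** (1 / s)
```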
Lemma 2.2. [21] (i) Suppose $X\in\mathcal{H}$, $\alpha > 0$, $p > 0$. For any $c > 0$,

\begin{align} C_{\mathbb{V}}(|X|^{p}) < \infty\Leftrightarrow\sum\limits_{n = 1}^{\infty}n^{\alpha p-1}\mathbb{V}(|X| > cn^{\alpha}) < \infty. \end{align} | (2.1) |

(ii) If $C_{\mathbb{V}}(|X|^{p}) < \infty$, then for any $\theta > 1$ and $c > 0$,

\begin{align} \sum\limits_{k = 1}^{\infty}\theta^{k\alpha p}\mathbb{V}(|X| > c\theta^{k\alpha}) < \infty. \end{align} | (2.2) |
Lemma 2.3. [7] (Rosenthal's inequalities) Let $\{X_n,n\geqslant1\}$ be a sequence of END random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k\leqslant0$, and set $S_n = \sum_{k = 1}^{n}X_k$, $B_n = \sum_{k = 1}^{n}\hat{\mathbb{E}}X_k^2$, $M_{n,p} = \sum_{k = 1}^{n}\hat{\mathbb{E}}|X_k|^{p}$. Then for any $p\geqslant2$ and all $x > 0$,

\begin{align} \mathbb{V}(S_n\geqslant x)\leqslant(1+Ke)\frac{B_n}{x^2}, \end{align} | (2.3) |

where $K$ is some dominating constant, and there exists a constant $C_p\geqslant1$ such that for all $x > 0$ and $0 < \delta\leqslant1$,

\begin{align} \mathbb{V}(S_n\geqslant x)\leqslant C_p\delta^{-2p}K\frac{M_{n,p}}{x^{p}}+K{\rm{exp}}\left\{-\frac{x^2}{2B_n(1+\delta)}\right\}. \end{align} | (2.4) |
With Lemma 2.3 in hand, we can get the following Rosenthal's inequalities for m-END random variables.
Lemma 2.4. (Rosenthal's inequalities) Let $\{X_n,n\geqslant1\}$ be a sequence of $m$-END random variables in $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ with $\hat{\mathbb{E}}X_k\leqslant0$, and set $S_n = \sum_{k = 1}^{n}X_k$, $B_n = \sum_{k = 1}^{n}\hat{\mathbb{E}}X_k^2$, $M_{n,p} = \sum_{k = 1}^{n}\hat{\mathbb{E}}|X_k|^{p}$. Then for any $p\geqslant2$ and all $x > 0$,

\begin{align} \mathbb{V}(S_n\geqslant x)\leqslant m^2(1+Ke)\frac{B_n}{x^2}, \end{align} | (2.5) |

where $K$ is some dominating constant, and there exists a constant $C_p\geqslant1$ such that for all $x > 0$ and $0 < \delta\leqslant1$,

\begin{align} \mathbb{V}(S_n\geqslant x)\leqslant C_p\delta^{-2p}m^{p}K\frac{M_{n,p}}{x^{p}}+mK{\rm{exp}}\left\{-\frac{x^2}{2m^2B_n(1+\delta)}\right\}. \end{align} | (2.6) |
Proof. Let $r = [\frac{n}{m}]$ and define

\begin{align*} X_i^{'} = \begin{cases} X_i, & 1\leqslant i\leqslant n;\\ 0, & i > n. \end{cases} \end{align*}

Note that $S_{mr+j}^{'} = \sum_{i = 0}^{r}X_{mi+j}^{'}$, $j = 1,2,\cdots,m$; then

\begin{align*} S_n = \sum\limits_{j = 1}^{m}\sum\limits_{i = 0}^{r}X_{mi+j}^{'} = \sum\limits_{j = 1}^{m}S_{mr+j}^{'}, \end{align*}

and for all $x > 0$ and $n\geqslant m$,

\begin{align} (S_n\geqslant x)\subset\left(S_{mr+1}^{'}\geqslant\frac{x}{m}\right)\cup\cdots\cup\left(S_{mr+m}^{'}\geqslant\frac{x}{m}\right) = \bigcup\limits_{j = 1}^{m}\left(S_{mr+j}^{'}\geqslant\frac{x}{m}\right). \end{align} | (2.7) |
It follows from the definition of $m$-END random variables that $X_{j}^{'},X_{m+j}^{'},\cdots,X_{mr+j}^{'}$ are END random variables for each $j = 1,2,\cdots,m$. Hence, by (2.3) and (2.7), for all $x > 0$ and $n\geqslant m$ we have

\begin{align*} \mathbb{V}(S_n\geqslant x)\leqslant\mathbb{V}\left(\bigcup\limits_{j = 1}^{m}\left(S_{mr+j}^{'}\geqslant\frac{x}{m}\right)\right)\leqslant\sum\limits_{j = 1}^{m}\mathbb{V}\left(S_{mr+j}^{'}\geqslant\frac{x}{m}\right)\leqslant\sum\limits_{j = 1}^{m}(1+Ke)\frac{\sum\limits_{i = 0}^{r}\hat{\mathbb{E}}(X_{mi+j}^{'})^2}{(\frac{x}{m})^2} = m^2(1+Ke)\frac{B_n}{x^2}, \end{align*}
which implies (2.5).
By (2.4) and (2.7), for all $x > 0$, $n\geqslant m$ and $p\geqslant2$, we get

\begin{align*} \mathbb{V}(S_n\geqslant x)\leqslant&\sum\limits_{j = 1}^{m}\mathbb{V}\left(S_{mr+j}^{'}\geqslant\frac{x}{m}\right)\\ \leqslant&\sum\limits_{j = 1}^{m}\left(C_p\delta^{-2p}K\frac{\sum\limits_{i = 0}^{r}\hat{\mathbb{E}}|X_{mi+j}^{'}|^{p}}{(\frac{x}{m})^{p}}+K{\rm{exp}}\left\{-\frac{x^2}{2m^2\sum\limits_{i = 0}^{r}\hat{\mathbb{E}}(X_{mi+j}^{'})^2(1+\delta)}\right\}\right)\\ \leqslant&C_p\delta^{-2p}m^{p}K\frac{M_{n,p}}{x^{p}}+mK{\rm{exp}}\left\{-\frac{x^2}{2m^2B_n(1+\delta)}\right\}, \end{align*}
which implies (2.6).
This finishes the proof of Lemma 2.4.
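The index decomposition underlying this proof can be checked numerically: with $r = [n/m]$ and $X_i^{'}$ the zero-padded sequence, the indices $mi+j$ ($0\leqslant i\leqslant r$, $1\leqslant j\leqslant m$) cover $\{1,\cdots,n\}$, so the $m$ subsums $S_{mr+j}^{'}$ recover $S_n$, and within each subsum consecutive indices are exactly $m$ apart (hence END by Definition 2.6). A minimal sketch with arbitrary test data:

```python
import random

def split_sum(xs, m):
    """Split S_n into the m subsums S'_{mr+j} from the proof of Lemma 2.4:
    r = [n/m], X'_i = X_i for 1 <= i <= n and X'_i = 0 for i > n."""
    n = len(xs)
    r = n // m
    def xp(i):  # X'_i with zero padding beyond n
        return xs[i - 1] if 1 <= i <= n else 0.0
    return [sum(xp(m * i + j) for i in range(r + 1)) for j in range(1, m + 1)]

rng = random.Random(7)
xs = [rng.uniform(-1.0, 1.0) for _ in range(17)]  # arbitrary data, n = 17
subs = split_sum(xs, m=3)
# The m subsums recover S_n exactly; within subsum j the indices m*i + j
# are m apart, so the corresponding variables are END by Definition 2.6.
assert len(subs) == 3
assert abs(sum(subs) - sum(xs)) < 1e-12
```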
Lemma 2.5. [7] (Borel-Cantelli Lemma) Let $\{A_n,n\geqslant1\}$ be a sequence of events in $\mathcal{F}$. Suppose that $\mathbb{V}$ is a countably sub-additive capacity. If $\sum_{n = 1}^{\infty}\mathbb{V}(A_n) < \infty$, then $\mathbb{V}(A_n, \ i.o.) = 0$, where $\{A_n, \ i.o.\} = \bigcap_{n = 1}^{\infty}\bigcup_{i = n}^{\infty}A_i$.
Theorem 3.1. Let $\{X,X_{ni},n\geqslant1,1\leqslant i\leqslant n\}$ be an array of rowwise $m$-END and identically distributed random variables under sub-linear expectations with $\hat{\mathbb{E}}(X_{ni}) = \hat{\varepsilon}(X_{ni}) = 0$, and let $\{a_{ni},n\geqslant1,1\leqslant i\leqslant n\}$ be an array of real numbers. Suppose that $\alpha > 3/2$, $p > 1/\alpha$, $q > \max\{2,p\}$,
\begin{align} \sum\limits_{i = 1}^{n}|a_{ni}|^{q} = O(n), \end{align} | (3.1) |
and
\begin{align} \hat{\mathbb{E}}|X|^{p}\leqslant C_{\mathbb{V}}(|X|^{p}) < \infty; \end{align} | (3.2) |
then for any ε>0,
\begin{align} \sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \varepsilon n^{\alpha}\right\} < \infty. \end{align} | (3.3) |
Theorem 3.2. Suppose that the conditions of Theorem 3.1 hold, and 0<r<p, then for any ε>0,
\begin{align} \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|-\varepsilon n^{\alpha}\right\}_{+}^{r} < \infty. \end{align} | (3.4) |
Theorem 3.3. Suppose that the conditions of Theorem 3.1 hold, and αp=2, then for any ε>0,
\begin{align} n^{-2/p}\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\rightarrow0, \quad a.s. \ \mathbb{V}, \quad n\rightarrow\infty. \end{align} | (3.5) |
Remark 3.1. Theorems 3.1 and 3.3 extend the corresponding results of Yu et al. [14] from the classical probability space to the sub-linear expectations space.
Remark 3.2. Under sub-linear expectations, the main purpose of our paper is to improve the result of Zhong and Wu [21] from END random variables to arrays of rowwise $m$-END random variables, and to extend the range of $p$.
Remark 3.3. According to Definition 2.6, if $m = 1$ then the concept of $m$-END random variables reduces to that of END random variables under sub-linear expectations. Hence, $m$-END random variables are a natural extension of END random variables, and the class of $m$-END random variables includes END and ND random variables. Therefore Theorems 3.1, 3.2 and 3.3 also hold for arrays of END random variables and ND random variables under sub-linear expectations.
Proof of Theorem 3.1. Since

\begin{align*} \sum\limits_{i = 1}^{n}a_{ni}X_{ni} = \sum\limits_{i = 1}^{n}a_{ni}^{+}X_{ni}-\sum\limits_{i = 1}^{n}a_{ni}^{-}X_{ni}, \end{align*}

for any $\varepsilon > 0$ we have, by the sub-additivity of $\mathbb{V}$,

\begin{align} \sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \varepsilon n^{\alpha}\right\}\leqslant\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}^{+}X_{ni}\right| > \frac{\varepsilon n^{\alpha}}{2}\right\}+\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}^{-}X_{ni}\right| > \frac{\varepsilon n^{\alpha}}{2}\right\}. \end{align} | (4.1) |

Hence, without loss of generality, we may assume $a_{ni}\geqslant0$ for all $n\geqslant1$ and $1\leqslant i\leqslant n$, and it suffices to prove that

\begin{align} \sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\sum\limits_{i = 1}^{n}a_{ni}X_{ni} > \varepsilon n^{\alpha}\right\} < \infty, \quad \forall\varepsilon > 0. \end{align} | (4.2) |

Indeed, since $\{-X_{ni},n\geqslant1,i\geqslant1\}$ still satisfies the conditions of Theorem 3.1, (4.2) also yields

\begin{align} \sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left\{\sum\limits_{i = 1}^{n}a_{ni}X_{ni} < -\varepsilon n^{\alpha}\right\} < \infty, \quad \forall\varepsilon > 0, \end{align} | (4.3) |

and (3.3) follows from (4.2) and (4.3).
In the following, we prove (4.2). For all $n\geqslant1$ and $1\leqslant i\leqslant n$, denote

\begin{align} X_{ni}^{'} = -n^{\alpha}I(X_{ni} < -n^{\alpha})+X_{ni}I(|X_{ni}|\leqslant n^{\alpha})+n^{\alpha}I(X_{ni} > n^{\alpha}), \quad X_{ni}^{''} = (X_{ni}+n^{\alpha})I(X_{ni} < -n^{\alpha})+(X_{ni}-n^{\alpha})I(X_{ni} > n^{\alpha}). \end{align} | (4.4) |
By Definition 2.6, we know that \{X_{ni}^{'}, n\geqslant1, 1\leqslant i\leqslant n\} and \{a_{ni}X_{ni}^{'}, n\geqslant1, 1\leqslant i\leqslant n\} are still arrays of rowwise m -END random variables. For any 0 < \beta\leqslant q , by the Hölder inequality and (3.1), we obtain that
\begin{align} \left({\sum\limits_{i = 1}^{n}{a_{ni}^{\beta}}}\right)\leqslant \left({\sum\limits_{i = 1}^{n}{a_{ni}^{q}}}\right)^{\frac{\beta}{q}}\left(\sum\limits_{i = 1}^n 1 \right)^{1-\frac{\beta}{q}} \leqslant Cn. \end{align} | (4.5) |
For any \varepsilon > 0 ,
{\left\{\sum\limits_{i = 1}^{n}a_{ni}X_{ni} > \varepsilon n^{\alpha}\right\}\subset\left\{\bigcup\limits_{i = 1}^{n}(|X_{ni}| > n^{\alpha})\right\}\bigcup \left\{\sum\limits_{i = 1}^na_{ni}X_{ni}^{'} > \varepsilon n^{\alpha}\right\}}, |
it is easy to see that
\begin{align} &\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\mathbb{V}}\left(\sum\limits_{i = 1}^{n}a_{ni}X_{ni} > \varepsilon n^{\alpha}\right)\\ \leqslant& {\sum\limits_{n = 1}^{\infty}}n^{\alpha p-2}{\mathbb{V}}\left\{\bigcup\limits_{i = 1}^{n}(|X_{ni}| > n^{\alpha})\bigcup \left(\sum\limits_{i = 1}^na_{ni}X_{ni}^{'} > \varepsilon n^{\alpha}\right)\right\}\\ \leqslant & {\sum\limits_{n = 1}^{\infty}}n^{\alpha p-2}{\sum\limits_{i = 1}^{n}}{\mathbb{V}}(|X_{ni}| > n^{\alpha})+{\sum\limits_{n = 1}^{\infty}}n^{\alpha p-2}{\mathbb{V}}\left(\sum\limits_{i = 1}^{n}a_{ni}X_{ni}^{'} > \varepsilon n^{\alpha}\right)\\ \doteq & H_1+H_2. \end{align} |
Hence, we need to prove H_1 < \infty and H_2 < \infty .
For 0 < \mu < 1 , let g(x)\in C_{l, Lip}(\mathbb{R}) be decreasing for x\geqslant 0 , with 0\leqslant g(x)\leqslant 1 for all x\in\mathbb{R} , g(x) = 1 if |x|\leqslant\mu , and g(x) = 0 if |x| > 1 . Then
\begin{align} I(|x|\leqslant\mu)\leqslant g(|x|)\leqslant I(|x|\leqslant1), \quad I(|x| > 1)\leqslant1-g(|x|)\leqslant I(|x| > \mu). \end{align} | (4.6) |
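A concrete choice of $g$ satisfying (4.6) is the piecewise-linear function below (an assumed example; any Lipschitz decreasing interpolation between $\mu$ and $1$ also works), together with a grid check of the two sandwich inequalities:

```python
mu = 0.5  # any 0 < mu < 1 works; 1/2 is an illustrative choice

def g(t):
    """Piecewise-linear truncation function: 1 on [0, mu], 0 beyond 1,
    linear in between; Lipschitz, hence a member of C_{l,Lip}(R)."""
    t = abs(t)
    if t <= mu:
        return 1.0
    if t >= 1.0:
        return 0.0
    return (1.0 - t) / (1.0 - mu)

def ind(cond):
    return 1.0 if cond else 0.0

# Grid check of (4.6): I(|x| <= mu) <= g(x) <= I(|x| <= 1) and
# I(|x| > 1) <= 1 - g(x) <= I(|x| > mu).
for k in range(-40, 41):
    xv = k / 20.0
    assert ind(abs(xv) <= mu) <= g(xv) <= ind(abs(xv) <= 1.0)
    assert ind(abs(xv) > 1.0) <= 1.0 - g(xv) <= ind(abs(xv) > mu)
```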
By (4.6) and Lemma 2.2 (2.1),
\begin{align} H_1\leqslant&{\sum\limits_{n = 1}^{\infty}} n^{\alpha p-2} {\sum\limits_{i = 1}^{n}}\hat{\mathbb{E}}\left(1-g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\\ = &\sum\limits_{n = 1}^{\infty}{n^{\alpha p-1}}\hat{\mathbb{E}}\left(1-g\left(\frac{|X|}{n^{\alpha}}\right)\right)\\ \leqslant&\sum\limits_{n = 1}^{\infty}n^{\alpha p-1}\mathbb{V}({|X|} > \mu{n^{\alpha}}) < \infty. \end{align} |
Next we estimate H_2 . For any q > 0 , by the c_r inequality, (4.4) and (4.6), we obtain
\begin{align*} |X^{'}_{ni}|^{q}\leqslant& |X_{ni}|^{q}I({|X_{ni}|} \leqslant n^{\alpha}) +n^{\alpha q}I({{|X_{ni}|} > n^{\alpha}})\\ \leqslant &|X_{ni}|^{q}g\left(\frac{\mu{|X_{ni}|}}{n^{\alpha}}\right)+n^{\alpha q}\left(1-g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right), \\ \end{align*} |
furthermore,
\begin{align} \mathbb{\hat{E}}|X^{'}_{ni}|^{q}\leqslant& \mathbb{\hat{E}}{\left(|X|^{q}g\left(\frac{{\mu|X|}}{n^{\alpha}}\right)\right)}+n^{\alpha q}\mathbb{\hat{E}}\left(1-g\left(\frac{|X|}{n^{\alpha}}\right)\right)\\ \leqslant &\mathbb{\hat{E}}{\left(|X|^{q}g\left(\frac{{\mu|X|}}{n^{\alpha}}\right)\right)}+n^{\alpha q}\mathbb{V}(|X| > \mu n^{\alpha}). \end{align} | (4.7) |
Case A_1 : 0 < p < 1 .
By (4.5), (4.7), the Markov inequality and \alpha p > 1 , we get
\begin{align*} n^{-\alpha}{\left|\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}X^{'}_{ni}\right|}\leqslant& n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|X^{'}_{ni}|\\ \leqslant&n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}{\left(|X_{ni}|g\left(\frac{\mu|X_{ni}|}{n^{\alpha}}\right)\right)} +\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}\left(1-g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\\ \leqslant&n^{1-\alpha}\mathbb{\hat{E}}|X|I(|X|\leqslant \frac{1}{\mu}n^{\alpha}) +n\mathbb{V}(|X| > \mu n^{\alpha})\\ \leqslant &Cn^{1-\alpha p}\mathbb{\hat{E}}|X|^{p}\rightarrow0, \quad n\rightarrow \infty. \end{align*} |
Case A_2 : p\geqslant 1 .
By (4.5), \mathbb{\hat{E}}X_{ni} = 0 and \alpha p > 1 , one can get that
\begin{align*} n^{-\alpha}{\left|\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}X^{'}_{ni}\right|}\leqslant & n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|X_{ni}-X^{'}_{ni}|\\ = &n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|X^{''}_{ni}|\\ \leqslant&n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}[(|X_{ni}|-n^{\alpha})I(|X_{ni}| > n^{\alpha})]\\ \leqslant& n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}\left[|X_{ni}|\left(1-g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\right]\\ \leqslant& Cn^{1-\alpha}\mathbb{\hat{E}}\left[|X|\left(1-g\left(\frac{|X|}{n^{\alpha}}\right)\right)\right]\\ \leqslant& Cn^{1-\alpha p}\mathbb{\hat{E}}|X|^{p}\rightarrow0, \quad n\rightarrow \infty. \end{align*} |
It follows that for all n large enough,
\begin{align*} n^{-\alpha}\left|\sum\limits_{i = 1}^{n}a_{ni}{\hat{\mathbb{E}}}X_{ni}^{'}\right| < \frac{\varepsilon}{2}, \end{align*} |
which implies that
\begin{align} H_2\leqslant C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\mathbb{V}}\left\{\sum\limits_{i = 1}^{n}a_{ni}(X_{ni}^{'}-{\hat{\mathbb{E}}}X_{ni}^{'}) > \frac{\varepsilon n^{\alpha}}{2}\right\}\doteq H_3. \end{align} |
By Definition 2.6, we know that \{a_{ni}(X^{'}_{ni}-\hat{\mathbb{E}}X^{'}_{ni}), n\geqslant1, 1\leqslant i\leqslant n\} are still arrays of rowwise m -END random variables, and \hat{\mathbb{E}}(a_{ni}(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'})) = 0 . In order to prove H_2 < \infty , we need to show H_3 < \infty .
Case B_1 : p < 2 .
By the c_r inequality, the Jensen inequality and (2.5) in Lemma 2.4, combined with (4.5) and (4.7), we get
\begin{align*} H_3\leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}(4(1+Ke))m^2\frac{\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}(a _{ni}(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'}))^2}{(\varepsilon n^{\alpha})^2}\\ \leqslant& C \sum\limits_{n = 1}^{\infty}n^{\alpha p-2-2\alpha}{\sum\limits_{i = 1}^{n}}{\hat{\mathbb{E}}}(a_{ni}(X_{ni}^{'}-\hat{\mathbb{E}}{X_{ni}^{'}}))^2\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2-2\alpha}\sum\limits_{i = 1}^{n}a_{ni}^2\hat{\mathbb{E}}(X_{ni}^{'})^2\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-1-2\alpha}\left[\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{n^\alpha}\right)\right)+n^{2\alpha}\mathbb{V}(|X| > \mu n^{\alpha})\right]\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-1-2\alpha}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{n^\alpha}\right)\right)+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-1}\mathbb{V}(|X| > \mu n^{\alpha})\\ \doteq& H_{31}+H_{32}. \end{align*} |
By (2.1), H_{32} < \infty . Next we prove H_{31} < \infty .
For 0 < \mu < 1 , let g_{k}(x)\in C_{l, Lip}(\mathbb{R}) , k\geqslant1 , be such that 0\leqslant g_{k}(x)\leqslant 1 for all x\in \mathbb{R} , g_k(\frac{x}{2^{k\alpha}}) = 1 if 2^{(k-1)\alpha} < |x|\leqslant2^{k\alpha} , and g_k(\frac{x}{2^{k\alpha}}) = 0 if |x|\leqslant \mu 2^{(k-1)\alpha} or |x| > (1+\mu)2^{k\alpha} . Then
\begin{align} g_k\left(\frac{|X|}{2^{k\alpha}}\right)\leqslant I(\mu2^{(k-1)\alpha} < |X|\leqslant(1+\mu)2^{k\alpha}), \\ |X|^lg\left(\frac{|X|}{2^{j\alpha}}\right)\leqslant 1+\sum\limits_{k = 1}^{j}|X|^lg_k\left(\frac{|X|}{2^{k\alpha}}\right), \forall l > 0. \end{align} | (4.8) |
By (4.8) and the fact that g(x) is decreasing for x\geqslant 0 ,
\begin{align} H_{31}\leqslant&C\sum\limits_{j = 1}^{\infty}\sum\limits_{n = 2^j}^{2^{j+1}-1}n^{\alpha p-2\alpha-1}\hat{\mathbb{E}}\left(X^2g\left(\frac{\mu |X|}{n^\alpha}\right)\right)\\ \leqslant&C\sum\limits_{j = 1}^{\infty}2^{(\alpha p-2\alpha-1)j}2^j\hat{\mathbb{E}}\left(X^2g\left(\frac{\mu|X|}{2^{\alpha(j+1)}}\right)\right)\\ \leqslant&C\sum\limits_{j = 1}^{\infty}2^{\alpha(p-2)j}\hat{\mathbb{E}}\left(1+\sum\limits_{k = 1}^jX^2g_k\left(\frac{\mu|X|}{2^{(k+1)\alpha}}\right)\right)\\ \leqslant&C\sum\limits_{j = 1}^{\infty}2^{\alpha(p-2)j} +C\sum\limits_{j = 1}^{\infty}2^{\alpha(p-2)j}\sum\limits_{k = 1}^j\hat{\mathbb{E}}\left(X^2g_k\left(\frac{\mu|X|}{2^{\alpha(k+1)}}\right)\right)\\ \doteq&H_{311}+H_{312}. \end{align} | (4.9) |
By p < 2 , we obtain that H_{311} < \infty . For H_{312} , by (4.8) and (2.2) in Lemma 2.2, we get
\begin{align} H_{312}\leqslant&C\sum\limits_{k = 1}^{\infty}\sum\limits_{j = k}^{\infty}2^{\alpha(p-2)j}\hat{\mathbb{E}}\left(X^2g_k\left(\frac{\mu|X|}{2^{\alpha(k+1)}}\right)\right)\\ \leqslant&C\sum\limits_{k = 1}^{\infty}2^{\alpha pk}\mathbb{V}(|X| > 2^{\alpha k}) < \infty. \end{align} | (4.10) |
Case B_2 : p\geqslant2 .
By q > p\geqslant2 and n\geqslant m , taking \delta = 1 in (2.6) of Lemma 2.4, we have
\begin{align*} H_3\leqslant&\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}C_p\delta^{-2p}m^{p}K\frac{\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}|a_{ni}(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'})|^q}{(\varepsilon n^{\alpha})^q}\\ &+\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}mK{\rm{exp}}\left\{-\frac{(\varepsilon n^{\alpha})^2}{8m^2\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}(a_{ni}(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'}))^2(1+\delta)}\right\}\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\frac{\sum\limits_{i = 1}^{n}a_{ni}^q\hat{\mathbb{E}}|(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'})|^q}{(\varepsilon n^{\alpha})^q}\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\rm{exp}}\left\{-\frac{(\varepsilon n^{\alpha})^2}{16m^2\sum\limits_{i = 1}^{n}a_{ni}^2\hat{\mathbb{E}}(X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'})^2}\right\}\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2-\alpha q}\sum\limits_{i = 1}^{n}a_{ni}^{q}\hat{\mathbb{E}}|X_{ni}^{'}-\hat{\mathbb{E}}X_{ni}^{'}|^q\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\rm{exp}}\left\{-\frac{(\varepsilon n^{\alpha})^2}{16n^2\sum\limits_{i = 1}^na_{ni}^2\hat{\mathbb{E}}(X_{ni}^{'}-{\hat{\mathbb{E}}}X_{ni}^{'})^2}\right\}\\ \doteq&I_{1}+I_{2}. \end{align*} |
Next we establish that I_{1} < \infty and I_{2} < \infty . For I_{1} , by \hat{\mathbb{E}}|X|^p < \infty , c_r inequality, Jensen inequality and (4.7), we have that
\begin{align} I_{1}\leqslant & C{\sum\limits_{n = 1}^{\infty}}n^{\alpha p-2-\alpha q}\sum\limits_{i = 1}^{n}a_{ni}^q{\hat{\mathbb{E}}}|X_{ni}^{'}|^q\\ \leqslant& C{\sum\limits_{n = 1}^{\infty}}n^{\alpha p-2-\alpha q}\sum\limits_{i = 1}^{n}a_{ni}^q\left({\hat{\mathbb{E}}}|X|^qg\left(\frac{|X|}{n^{\alpha}}\right)+n^{\alpha q}\mathbb{V}(|X| > \mu n^{\alpha})\right)\\ \leqslant& C\sum\limits_{i = 1}^{\infty}\sum\limits_{2^{i-1}\leqslant n < 2^i}n^{\alpha p-\alpha q-1}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{n^\alpha}\right)\right) +C\sum\limits_{n = 1}^{\infty}n^{\alpha p-1}\mathbb{V}(|X| > \mu n^{\alpha})\\ \doteq& I_{11}+I_{12}. \end{align} | (4.11) |
By (2.1), it is obvious that I_{12} < \infty . We only need to prove I_{11} < \infty . By (2.2) in Lemma 2.2 and (4.8), it is easy to prove that
\begin{align} I_{11}\leqslant&C\sum\limits_{i = 1}^{\infty}2^{i(\alpha p-\alpha q)}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{2^{i\alpha}}\right)\right)\\ \leqslant&C\sum\limits_{i = 1}^{\infty}2^{i(\alpha p-\alpha q)}+C\sum\limits_{i = 1}^{\infty}2^{i(\alpha p-\alpha q)}\sum\limits_{k = 1}^{i}\hat{\mathbb{E}}\left(|X|^qg_k\left(\frac{\mu|X|}{2^{k\alpha}}\right)\right)\\ \leqslant&C\sum\limits_{k = 1}^{\infty}\sum\limits_{i = k}^{\infty}2^{i(\alpha p-\alpha q)}\hat{\mathbb{E}}\left(|X|^qg_k\left(\frac{\mu|X|}{2^{k\alpha}}\right)\right)\\ \leqslant &C\sum\limits_{k = 1}^{\infty}2^{k\alpha p}\mathbb{V}(|X| > c2^{k\alpha}) < \infty. \end{align} | (4.12) |
Since \alpha > 3/2 , we have 2\alpha-3 > 0 , which implies that for all n large enough,
\begin{align*} \frac{\varepsilon^2}{16}n^{2\alpha-3}\geqslant\alpha p{\rm{ln}}n. \end{align*} |
Hence, combining with (3.2), we obtain
\begin{align} I_{2}\leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\rm{exp}}\left\{-\frac{\varepsilon^2}{16}n^{2\alpha-3}\right\}\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}{\rm{exp}}\{{\rm{ln}}n^{-\alpha p}\}\\ = &C\sum\limits_{n = 1}^{\infty}n^{-2} < \infty. \end{align}
Hence H_2 < \infty . This finishes the proof of Theorem 3.1.
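The elementary growth comparison used for $I_2$ — that $\frac{\varepsilon^2}{16}n^{2\alpha-3}$ eventually dominates $\alpha p\,{\rm{ln}}\,n$ once $\alpha > 3/2$ — can be checked numerically. The parameter values below are illustrative assumptions, not from the theorem:

```python
import math

# Sample parameters (illustrative): alpha > 3/2 guarantees
# 2*alpha - 3 > 0, so the power term eventually dominates the log term.
eps, alpha, p = 1.0, 2.5, 1.0   # 2*alpha - 3 = 2

def power_term(n):
    return (eps ** 2 / 16.0) * n ** (2 * alpha - 3)

def log_term(n):
    return alpha * p * math.log(n)

# Find the threshold n0 past which the inequality holds ...
n0 = 2
while power_term(n0) < log_term(n0):
    n0 += 1
# ... and check that beyond it exp(-power_term(n)) <= n^(-alpha*p),
# which is exactly the bound used to conclude I_2 < infinity.
for n in (n0, 10 * n0, 1000 * n0):
    assert math.exp(-power_term(n)) <= n ** (-alpha * p)
```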
Proof of Theorem 3.2. Without loss of generality, assume a_{ni}\geqslant0 for all n\geqslant1 and 1\leqslant i\leqslant n . For any \varepsilon > 0 , we have
\begin{align*} &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}C_{\mathbb{V}}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|-\varepsilon n^{\alpha}\right\}_{+}^{r}\\ = &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{0}^{\infty}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|-\varepsilon n^{\alpha} > x^{1/r}\right)dx\\ = &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{0}^{n^{\alpha r}}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|-\varepsilon n^{\alpha} > x^{1/r}\right)dx\\ &+\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|-\varepsilon n^{\alpha} > x^{1/r}\right)dx\\ \leqslant &\sum\limits_{n = 1}^{\infty}n^{\alpha p-2}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \varepsilon n^{\alpha}\right)\\ &+\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > x^{1/r}\right)dx. \end{align*}

The first sum is finite by Theorem 3.1. For the second sum, since \{-X_{ni}\} also satisfies the conditions of Theorem 3.1 and \mathbb{V} is sub-additive, it suffices to show that

\begin{align*} J\doteq\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\sum\limits_{i = 1}^{n}a_{ni}X_{ni} > x^{1/r}\right)dx < \infty. \end{align*}
For all n\geqslant 1 and 1\leqslant i\leqslant n , denote that
\begin{align*} &Y_{ni}^{'} = -x^{1/r}I(X_{ni} < -x^{1/r})+X_{ni}I(|X_{ni}|\leqslant x^{1/r})+{x^{1/r}I(X_{ni} > x^{1/r})}, \\ &Y_{ni}^{''} = (X_{ni}+x^{1/r})I(X_{ni} < -x^{1/r})+(X_{ni}-x^{1/r})I(X_{ni} > x^{1/r}), \end{align*} |
then
\begin{align*} J\leqslant &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\sum\limits_{i = 1}^{n}\mathbb{V}(|X_{ni}| > x^{1/r})dx+\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\sum\limits_{i = 1}^{n}a_{ni}Y_{ni}^{'} > x^{1/r}\right)dx\\ \leqslant &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\sum\limits_{i = 1}^{n}\mathbb{V}(|X_{ni}| > x^{1/r})dx\\ &+\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\sum\limits_{i = 1}^{n}a_{ni}(Y_{ni}^{'}-\mathbb{\hat{E}}(Y_{ni}^{'})) > x^{1/r}- \left|\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}(Y_{ni}^{'})\right|\right)dx\\ \doteq&J_{1}+J_{2}. \end{align*} |
In order to estimate J , it suffices to show that J_{1} < \infty and J_{2} < \infty . By (4.5), (2.1) in Lemma 2.2 and the fact that g(x) is decreasing for x\geqslant 0 , we get
\begin{align*} J_{1}\leqslant &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\sum\limits_{i = 1}^{n}\mathbb{\hat{E}}\left(1-g\left(\frac{|X_{ni}|}{x^{1/r}}\right)\right)dx\\ = &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}\mathbb{\hat{E}}\left(1-g\left(\frac{|X|}{x^{1/r}}\right)\right)dx\\ = &\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{m = n}^{\infty}\int_{m^{\alpha r}}^{(m+1)^{\alpha r}}\mathbb{\hat{E}}\left(1-g\left(\frac{|X|}{x^{1/r}}\right)\right)dx\\ \leqslant& \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{m = n}^{\infty}[(m+1)^{\alpha r}-m^{\alpha r}]\mathbb{\hat{E}}\left(1-g\left(\frac{|X|}{m^{\alpha}}\right)\right)\\ \leqslant&\sum\limits_{m = 1}^{\infty}m^{\alpha r-1}\mathbb{V}(|X| > \mu m^{\alpha})\sum\limits_{n = 1}^{m}n^{\alpha p-\alpha r-1}\\ \leqslant& \sum\limits_{m = 1}^{\infty}m^{\alpha p-1}\mathbb{V}(|X| > \mu m^{\alpha}) < \infty. \end{align*} |
Next we prove J_{2} < \infty . By (4.6) and the c_{r} inequality, for all \gamma > 0 ,
\begin{align} \mathbb{\hat{E}}|Y_{ni}^{'}|^{\gamma}\leqslant& \mathbb{\hat{E}}|X|^{\gamma}g\left(\frac{\mu|X|}{x^{1/r}}\right)+x^{\gamma/r}\mathbb{\hat{E}}\left(1-g\left(\frac{|X|}{x^{1/r}}\right)\right)\\ \leqslant& \mathbb{\hat{E}}{\left(|X|^{\gamma}g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)}+x^{\gamma/r}\mathbb{V}(|X| > \mu x^{1/r}). \end{align} | (4.13) |
Case C_1 : p\geqslant 1 .
By (4.5), \mathbb{\hat{E}}X_{ni} = 0 and \alpha p > 1 , we have
\begin{align*} \sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\left|\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}Y^{'}_{ni}\right|\leqslant & \sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|X_{ni}-Y^{'}_{ni}|\\ \leqslant&\sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|Y^{''}_{ni}|\\ = &\sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}[(|X_{ni}|-x^{1/r})I(|X_{ni}| > x^{1/r})]\\ \leqslant&n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}[|X_{ni}|I(|X_{ni}| > n^{\alpha})]\\ \leqslant& n^{-\alpha}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}\left[|X_{ni}|\left(1-g\left(\frac{|X_{ni}|}{n^{\alpha}}\right)\right)\right]\\ \leqslant& Cn^{1-\alpha}\mathbb{\hat{E}}\left[|X|\left(1-g\left(\frac{|X|}{n^{\alpha}}\right)\right)\right]\\ \leqslant& Cn^{1-\alpha p}\mathbb{\hat{E}}|X|^{p}\rightarrow0, \quad n\rightarrow \infty. \end{align*}
Case C_2 : 0 < p < 1 .
By (4.5), (4.13), the Markov inequality and \alpha p > 1 , we obtain
\begin{align*} \sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\left|\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}Y^{'}_{ni}\right|\leqslant& \sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r} \sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}|Y^{'}_{ni}|\\ \leqslant&\sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\sum\limits_{i = 1}^{n}a_{ni}\mathbb{\hat{E}}{\left(|X_{ni}|g\left(\frac{\mu|X_{ni}|}{x^{1/r}}\right)\right)}\\ &+\sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\sum\limits_{i = 1}^{n}a_{ni}x^{1/r}\mathbb{\hat{E}}\left(1-g\left(\frac{|X_{ni}|}{x^{1/r}}\right)\right)\\ \leqslant&\sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}n\mathbb{\hat{E}}|X|I(|X|\leqslant \frac{1}{\mu}x^{1/r}) +\sup\limits_{x\geqslant n^{\alpha r}}n\mathbb{V}(|X| > \mu x^{1/r})\\ \leqslant&Cn^{1-\alpha p}\mathbb{\hat{E}}|X|^{p}+n\mathbb{V}(|X| > \mu n^{\alpha})\\ \leqslant &Cn^{1-\alpha p}\mathbb{\hat{E}}|X|^{p}\rightarrow0, \quad n\rightarrow \infty. \end{align*} |
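The last two steps above can be written out in full. For 0 < p < 1 , on the event \{|X|\leqslant x^{1/r}/\mu\} one has |X|\leqslant|X|^{p}(x^{1/r}/\mu)^{1-p} , hence for x\geqslant n^{\alpha r} ,
\begin{align*} x^{-1/r}\hat{\mathbb{E}}\left[|X|I\left(|X|\leqslant \frac{1}{\mu}x^{1/r}\right)\right]\leqslant \mu^{p-1}x^{-p/r}\hat{\mathbb{E}}|X|^{p}\leqslant \mu^{p-1}n^{-\alpha p}\hat{\mathbb{E}}|X|^{p}, \end{align*}
while the Markov inequality for capacities gives n\mathbb{V}(|X| > \mu n^{\alpha})\leqslant \mu^{-p}n^{1-\alpha p}\hat{\mathbb{E}}|X|^{p} ; both contributions are therefore O(n^{1-\alpha p}) .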
Hence, it follows that for all n large enough,
\begin{align*} \sup\limits_{x\geqslant n^{\alpha r}}x^{-1/r}\left|\sum\limits_{i = 1}^{n}a_{ni}{\hat{\mathbb{E}}}Y_{ni}^{'}\right| < \frac{1}{2}, \end{align*} |
which implies that
\begin{align} J_2\leqslant \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty} \mathbb{V}\left(\sum\limits_{i = 1}^{n}a_{ni}(Y_{ni}^{'}-\hat{\mathbb{E}}(Y_{ni}^{'})) > \frac{x^{1/r}}{2}\right)dx\doteq J_3. \end{align} |
By Definition 2.6, \{a_{ni}(Y^{'}_{ni}-\hat{\mathbb{E}}Y^{'}_{ni}), n\geqslant1, 1\leqslant i\leqslant n\} is still an array of rowwise m -END random variables, and \hat{\mathbb{E}}(a_{ni}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'})) = 0 . Hence, to prove J_2 < \infty , it suffices to show J_3 < \infty .
Case D_1 : p < 2 .
By the c_r inequality, the Jensen inequality and (2.5) in Lemma 2.4, combined with (4.5), (4.9), (4.10) and (4.13) (the term involving 1-g below is finite by the same computation as for J_1 ), we obtain
\begin{align} J_3\leqslant&\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}(4(1+Ke))m^2\int_{n^{\alpha r}}^{\infty}\frac{\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}(a _{ni}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'}))^2}{x^{2/r}}dx\\ \leqslant& C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}x^{-2/r}{\sum\limits_{i = 1}^{n}}a_{ni}^{2}{\hat{\mathbb{E}}}(Y_{ni}^{'})^2dx\\ \leqslant&C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}x^{-2/r}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ &+\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}\hat{\mathbb{E}}\left(1-g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ \leqslant&C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{k = n}^{\infty}\int_{k^{\alpha r}}^{(k+1)^{\alpha r}}x^{-2/r}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{k = n}^{\infty}k^{\alpha r-1-2\alpha}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{k^\alpha}\right)\right)\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha r-1-2\alpha}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{k^\alpha}\right)\right)\sum\limits_{n = 1}^{k}n^{\alpha p-\alpha r-1}\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha r-1-2\alpha}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{k^\alpha}\right)\right)k^{\alpha p-\alpha r}\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha p-2\alpha-1}\hat{\mathbb{E}}\left(|X|^2g\left(\frac{\mu|X|}{k^\alpha}\right)\right) < \infty. \end{align} |
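The seventh step above interchanges the two sums and uses \sum_{n = 1}^{k}n^{\alpha p-\alpha r-1}\leqslant Ck^{\alpha p-\alpha r} , which requires \alpha p-\alpha r > 0 ; assuming this (as in the setting of Theorem 3.2), a quick check by integral comparison:
\begin{align*} \sum\limits_{n = 1}^{k}n^{\alpha p-\alpha r-1}\leqslant 1+\int_{1}^{k+1}t^{\alpha p-\alpha r-1}dt\leqslant 1+\frac{(k+1)^{\alpha p-\alpha r}}{\alpha p-\alpha r}\leqslant Ck^{\alpha p-\alpha r}, \end{align*}
so the exponent in the final series is \alpha r-1-2\alpha+\alpha p-\alpha r = \alpha p-2\alpha-1 .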
Case D_2 : p\geqslant2 .
For q > p\geqslant2 and n\geqslant m , by (2.6) in Lemma 2.4, the c_r inequality and the Jensen inequality, taking \delta = 1 , we have
\begin{align*} J_3\leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\frac{\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}|a_{ni}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'})|^q}{x^{q/r}}dx\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}{\rm{exp}}\left\{-\frac{x^{2/r}}{8m^2\sum\limits_{i = 1}^{n}\hat{\mathbb{E}}(a_{ni}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'}))^2(1+\delta)}\right\}dx\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}x^{-q/r}{\sum\limits_{i = 1}^{n}a_{ni}^q\hat{\mathbb{E}}|Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'}|^q}dx\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}{\rm{exp}}\left\{-\frac{x^{2/r}}{16m^2\sum\limits_{i = 1}^{n}a_{ni}^2\hat{\mathbb{E}}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'})^2}\right\}dx\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}x^{-q/r}\sum\limits_{i = 1}^{n}a_{ni}^{q}\hat{\mathbb{E}}|Y_{ni}^{'}|^{q}dx\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}{\rm{exp}}\left\{-\frac{x^{2/r}}{16m^2\sum\limits_{i = 1}^{n}a_{ni}^2\hat{\mathbb{E}}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'})^2}\right\}dx\\ \doteq&J_{31}+J_{32}. \end{align*} |
Next we prove J_{31} < \infty and J_{32} < \infty . By the c_r inequality, the Jensen inequality and (2.5), combined with (4.5), (4.11), (4.12) and (4.13), we have
\begin{align} J_{31}\leqslant&C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}x^{-q/r}\left(\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{x^{1/r}}\right)\right) +x^{q/r}\hat{\mathbb{E}}\left(1-g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)\right)dx\\ \leqslant&C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}x^{-q/r}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ &+C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\int_{n^{\alpha r}}^{\infty}\hat{\mathbb{E}}\left(1-g\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ \leqslant&C \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{k = n}^{\infty}\int_{k^{\alpha r}}^{(k+1)^{\alpha r}}x^{-q/r}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{x^{1/r}}\right)\right)dx\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-1}\sum\limits_{k = n}^{\infty}k^{\alpha r-1-\alpha q}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{k^\alpha}\right)\right)\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha r-1-\alpha q}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{k^\alpha}\right)\right)\sum\limits_{n = 1}^{k}n^{\alpha p-\alpha r-1}\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha r-1-\alpha q}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{k^\alpha}\right)\right)k^{\alpha p-\alpha r}\\ \leqslant&C\sum\limits_{k = 1}^{\infty}k^{\alpha p-\alpha q-1}\hat{\mathbb{E}}\left(|X|^qg\left(\frac{\mu|X|}{k^\alpha}\right)\right) < \infty. \end{align} |
Let \beta > \max\{\frac{\alpha p-1}{2\alpha-3} , \frac{r}{2}\} ; then \frac{2\beta}{r} > 1 and 2-\alpha p+(2\alpha-3)\beta > 1 . Since {\rm{e}}^s > s^{\beta} for all s large enough, substituting x = n^{\alpha r}t and noting (3.2), we obtain
\begin{align*} \nonumber J_{32}\leqslant& C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}{\rm{exp}}\left\{-\frac{x^{2/r}}{16n^3\hat{\mathbb{E}}(Y_{ni}^{'}-\hat{\mathbb{E}}Y_{ni}^{'})^2}\right\}dx\\ \leqslant& C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}{\rm{exp}}\left\{-\frac{x^{2/r}}{n^3}\right\}dx\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}n^{\alpha r}\int_{1}^{\infty}{\rm{exp}}\left\{-\frac{n^{2\alpha}t^{2/r}}{n^3}\right\}dt\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}n^{\alpha r}\int_{1}^{\infty}\left({\frac{n^{2\alpha}t^{2/r}}{n^3}}\right)^{-\beta}dt\\ \leqslant&C\sum\limits_{n = 1}^{\infty}n^{\alpha p-2-(2\alpha-3)\beta}\int_{1}^{\infty}\frac{1}{t^{2\beta/r}}dt\\ \leqslant&C\sum\limits_{n = 1}^{\infty}\frac{1}{n^{2-\alpha p+(2\alpha-3)\beta}} < \infty. \end{align*} |
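The two conditions imposed on \beta yield exactly the convergence needed in the last two lines; checking the arithmetic (recall \alpha > 3/2 , so 2\alpha-3 > 0 ):
\begin{align*} \beta > \frac{\alpha p-1}{2\alpha-3}&\ \Longrightarrow\ (2\alpha-3)\beta > \alpha p-1\ \Longrightarrow\ 2-\alpha p+(2\alpha-3)\beta > 1,\\ \beta > \frac{r}{2}&\ \Longrightarrow\ \frac{2\beta}{r} > 1, \end{align*}
so \int_{1}^{\infty}t^{-2\beta/r}dt < \infty and \sum_{n = 1}^{\infty}n^{-(2-\alpha p+(2\alpha-3)\beta)} < \infty .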
Since \hat{\mathbb{E}}(X_{ni}) = {\hat{\varepsilon}}(X_{ni}) = 0 , the array \{-X_{ni}, n\geqslant 1, i\geqslant 1\} also satisfies the conditions of Theorem 3.2, so we obtain
\begin{align*} \sum\limits_{n = 1}^{\infty}n^{\alpha p-\alpha r-2}\int_{n^{\alpha r}}^{\infty}\mathbb{V}\left(\sum\limits_{i = 1}^{n}a_{ni}X_{ni} < -x^{1/r}\right)dx < \infty. \end{align*} |
Hence, the proof of Theorem 3.2 is finished.
Proof of Theorem 3.3. Taking \alpha p = 2 in Theorem 3.1, we get
\begin{align*} \sum\limits_{n = 1}^{\infty}\mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \varepsilon n^{\alpha}\right\} < \infty. \end{align*} |
By Lemma 2.5, we have
\begin{align*} \mathbb{V}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \varepsilon n^{\alpha}, i.o.\right\} = 0, \end{align*} |
and
\begin{align*} \mathcal{V}\left\{\bigcup\limits_{m = 1}^{\infty}\bigcap\limits_{n = m}^{\infty}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|\leqslant \varepsilon n^{\alpha}\right)\right\} = 1, \end{align*} |
furthermore,
\begin{align*} \mathbb{V}\left\{\bigcup\limits_{m = 1}^{\infty}\bigcap\limits_{n = m}^{\infty}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|\leqslant \varepsilon n^{\alpha}\right)\right\} = 1. \end{align*} |
Then, since \varepsilon > 0 is arbitrary,
\begin{align*} \left(\bigcup\limits_{m = 1}^{\infty}\bigcap\limits_{n = m}^{\infty}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right|\leqslant \varepsilon n^{\alpha}\right)\right)\subset\left(n^{-\alpha}\sum\limits_{i = 1}^na_{ni}X_{ni}\longrightarrow 0\right). \end{align*} |
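The inclusion above is stated for a fixed \varepsilon ; passing to all \varepsilon simultaneously uses the countable sub-additivity of \mathbb{V} (assumed, as is standard in this setting). A sketch with \varepsilon = 1/j , j\in\mathbb{N} :
\begin{align*} \mathbb{V}\left(\bigcup\limits_{j = 1}^{\infty}\left\{\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \frac{n^{\alpha}}{j},\ i.o.\right\}\right)\leqslant\sum\limits_{j = 1}^{\infty}\mathbb{V}\left(\left|\sum\limits_{i = 1}^{n}a_{ni}X_{ni}\right| > \frac{n^{\alpha}}{j},\ i.o.\right) = 0, \end{align*}
so, outside a set of \mathbb{V} -capacity zero, n^{-\alpha}\sum_{i = 1}^{n}a_{ni}X_{ni}\rightarrow 0 .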
When \alpha = 2/p , we have
\begin{align*} \mathbb{V}\left\{\left(n^{-2/p}\sum\limits_{i = 1}^na_{ni}X_{ni}\right)\longrightarrow 0\right\} = 1. \end{align*} |
Thus, the proof of Theorem 3.3 is completed.
This paper was supported by the Department of Science and Technology of Jilin Province (Grant No. YDZJ202101ZYTS156), and Graduate Innovation Project of Beihua University (2021003).
All authors declare no conflict of interest in this paper.