Since the concept of the sub-linear expectation space was put forward, it has usefully complemented the theory of classical probability spaces. In this paper, we establish complete convergence and complete integral convergence for weighted sums of widely acceptable (abbreviated as WA) random variables under sub-linear expectations, under different sets of conditions. We extend complete moment convergence results from probability space to sub-linear expectation space.
Citation: Chengcheng Jia, Qunying Wu. Complete convergence and complete integral convergence for weighted sums of widely acceptable random variables under the sub-linear expectations[J]. AIMS Mathematics, 2022, 7(5): 8430-8448. doi: 10.3934/math.2022470
Classical limit theorems are based on additive probabilities and additive expectations, which is appropriate when the underlying model is fully determined; in many practical fields, however, this additivity assumption is not tenable. Nonlinear expectation theory, as a mathematical framework, allows analysis and computation under model uncertainty, and within this theory the sub-linear expectation plays a special role and has been studied the most. Peng [1,2,3] put forward the general concept of the sub-linear expectation space in 2006, replacing the probability of classical probability space by a capacity and thereby enriching probability theory. Subsequently, Zhang [4,5,6,7] obtained several important inequalities in sub-linear expectation spaces; these inequalities are powerful tools for studying such spaces. Zhang also studied the law of the iterated logarithm and the strong law of large numbers in sub-linear expectation spaces. Extending this work further, Wu and Jiang [8] obtained a Marcinkiewicz-type strong law of large numbers and a Chover-type law of the iterated logarithm for more general cases in sub-linear expectation spaces.
In probability space, complete convergence and complete moment convergence are two very important research topics. The notion of complete convergence was proposed by Hsu and Robbins [9] in 1947, and the concept of complete moment convergence was introduced by Chow [10] in 1988; complete moment convergence is stronger than complete convergence. Both notions are by now well developed in probability space. For example, Qiu [11], Wu [12] and Shen [13] obtained complete convergence and complete moment convergence for sequences of independent identically distributed (i.i.d.), negatively associated (NA) and extended negatively dependent (END) random variables, respectively. Many methods and tools from probability space cannot be used in sub-linear expectation spaces, which increases the difficulty of studying them, but many scholars have carried out such research. For instance, Wu [14] extended the theorems of Wu [12] from probability space to sub-linear expectation space. Feng [15] and Liang [16] obtained complete convergence and complete integral convergence for arrays of row-wise ND and END random variables, respectively. Zhong [17] studied complete convergence and complete integral convergence for weighted sums of END random variables, and Lu [18] obtained more general conditions and conclusions than Zhong [17] in sub-linear expectation space. The exponential inequality used in this article was proposed by Kuczmaszewska [19] in 2020; in that inequality, the truncated random variables are assumed to form a WA sequence. Because the WA notion was proposed only recently, there is little research on WA random variable sequences in sub-linear expectation spaces. Hu [20] proved complete convergence for weighted sums of WA random variables in 2021.
The organizational structure of this paper is as follows. In Section 2, we summarize the basic notation, concepts and related properties of sub-linear expectation spaces, and give preliminary lemmas that are helpful for obtaining the main results. In Section 3, we extend the results of [21] from probability space to sub-linear expectation space, obtain the corresponding conclusions, and prove complete convergence and complete integral convergence for weighted sums of WA random variables in sub-linear expectation space.
We use the framework and notations of Peng [1,2,3]. Let (Ω,F) be a given measurable space and let H be a linear space of real functions defined on (Ω,F) such that if X1,X2,⋯,Xn∈H then φ(X1,X2,⋯,Xn)∈H for each φ∈Cl,Lip(Rn), where Cl,Lip(Rn) denotes the linear space of (local Lipschitz) functions φ satisfying
|φ(x)−φ(y)|⩽c(1+|x|m+|y|m)|x−y|, ∀x,y∈Rn, |
for some c>0, m∈N depending on φ. H is considered as a space of random variables. In this case, we denote X∈H.
Definition 2.1. A sub-linear expectation ˆE on H is a function ˆE:H→[−∞,+∞] satisfying the following properties: for all X,Y∈H, we have
(a) Monotonicity: If X⩾Y, then ˆEX⩾ˆEY;
(b) Constant preserving: ˆE(c)=c, for c∈R;
(c) Sub-additivity: ˆE(X + Y)⩽ˆEX+ˆEY whenever ˆEX+ˆEY is not of the form + ∞−∞ or −∞+∞;
(d) Positive homogeneity: ˆE(λX)=λˆEX,λ⩾0. Convention: when λ = 0 and ˆEX = + ∞, ˆE(λX)=λˆEX = 0.
The triple (Ω,H,ˆE) is called a sub-linear expectation space.
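As a point of orientation (this numerical sketch is not part of the original paper; the three-point sample space, the family of two measures and the variables below are purely hypothetical), a sub-linear expectation can be realized as the supremum of ordinary linear expectations over a family of probability measures, ˆEX=supθEθX, and such a supremum automatically satisfies properties (a)-(d):

import numpy as np

# Hypothetical toy example: a sub-linear expectation as the upper envelope
# E_hat[X] = sup_theta E_theta[X] of two linear expectations.
omega = np.array([0.0, 1.0, 2.0])                  # three-point sample space
family = [np.array([0.2, 0.3, 0.5]),               # measure theta_1
          np.array([0.5, 0.3, 0.2])]               # measure theta_2

def E_hat(X):
    """Upper (sub-linear) expectation over the family of measures."""
    return max(float(np.dot(p, X)) for p in family)

def eps_hat(X):
    """Conjugate (lower) expectation: eps(X) = -E_hat(-X)."""
    return -E_hat(-X)

X = omega                                          # X(w) = w
Y = (omega - 1.0) ** 2                             # Y(w) = (w - 1)^2
assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12 # sub-additivity (c)
assert abs(E_hat(2 * X) - 2 * E_hat(X)) < 1e-12    # positive homogeneity (d)
assert eps_hat(X) <= E_hat(X)                      # lower <= upper expectation
print(E_hat(X), eps_hat(X))                        # 1.3 and 0.7 for this family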
Given a sub-linear expectation ˆE, let us denote the conjugate expectation ˆε of ˆE by
ˆεX:=−ˆE(−X), ∀X∈H. |
From the definition, it is easily shown that for all X,Y∈H,
ˆεX⩽ˆEX,ˆE(X+c)=ˆEX+c,|ˆE(X−Y)|⩽ˆE|X−Y| and ˆE(X−Y)⩾ˆEX−ˆEY. |
If ˆEY=ˆεY, then ˆE(X+aY)=ˆEX+aˆEY for any a∈R. Next, we consider the capacities corresponding to the sub-linear expectations. Let G⊂F. A function V:G→[0,1] is called a capacity if
V(∅)=0,V(Ω)=1; and V(A)⩽V(B) for ∀A⊆B,A,B∈G. |
It is called sub-additive if V(A∪B)⩽V(A)+V(B) for all A,B∈G with A∪B∈G. In the sub-linear expectation space (Ω,H,ˆE), we denote a pair (V,𝒱) of capacities by
V(A):=inf{ˆEξ:I(A)⩽ξ, ξ∈H},𝒱(A):=1−V(Ac), ∀A∈F, |
where Ac is the complement set of A. By the definitions of V and 𝒱, it is obvious that V is sub-additive, and
𝒱(A)⩽V(A), ∀A∈F. |
If f⩽I(A)⩽g, f,g∈H, then
ˆEf≤V(A)≤ˆEg,ˆεf≤𝒱(A)≤ˆεg. | (2.1) |
This implies Markov inequality: ∀X∈H,
V(|X|⩾x)⩽ˆE(|X|p)/xp,∀x>0,p>0. |
This follows from I(|X|>x)⩽|X|p/xp∈H. From Lemma 4.1 in Zhang [5], we have the Hölder inequality: ∀X,Y∈H,p,q>1, satisfying p−1+q−1=1,
ˆE|XY|⩽(ˆE|X|p)1/p(ˆE|Y|q)1/q, |
whenever
ˆE(|X|p)<∞, ˆE(|Y|q)<∞. |
In particular, the Jensen inequality:
(ˆE|X|r)1/r⩽(ˆE|X|s)1/s,for 0<r⩽s. |
We define the Choquet integrals (CV,C𝒱) by
CV(X)=∫∞0V(X⩾x)dx+∫0−∞[V(X⩾x)−1]dx, |
with V being replaced by V and 𝒱, respectively.
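For illustration only (a hypothetical numerical sketch, not taken from the paper): when X⩾0 the second integral vanishes and CV(X) is just the area under the upper-capacity tail x↦V(X⩾x), which can be approximated on a grid:

import numpy as np

# Hypothetical toy example: Choquet integral C_V(X) = int_0^infty V(X >= x) dx
# for a nonnegative X, with V the upper capacity of a two-measure family.
omega = np.array([0.0, 1.0, 2.0])
family = [np.array([0.2, 0.3, 0.5]), np.array([0.5, 0.3, 0.2])]

def V(event):
    """Upper capacity of a {True, False}-valued event on the sample space."""
    return max(float(np.dot(p, event)) for p in family)

def choquet_upper(X, upper=10.0, steps=100000):
    """Left Riemann-sum approximation of C_V(X) for X >= 0."""
    grid = np.linspace(0.0, upper, steps, endpoint=False)
    dx = upper / steps
    return float(sum(V(X >= x) for x in grid) * dx)

X = omega                       # X(w) = w
print(choquet_upper(X))         # close to 0.8*1 + 0.5*1 = 1.3 for this family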
Definition 2.2.
(i) ˆE is said to be countably sub-additive if ˆE(X)⩽∞∑n=1ˆE(Xn) whenever X⩽∞∑n=1Xn, X,Xn∈H, X⩾0, Xn⩾0, n⩾1.
(ii) V is said to be countably sub-additive if
V(∞⋃n=1An)⩽∞∑n=1V(An), ∀An∈F. |
Definition 2.3. (Identical distribution) Let X1 and X2 be two random variables defined respectively on the sub-linear expectation spaces (Ω1,H1,ˆE1) and (Ω2,H2,ˆE2). They are called identically distributed if
ˆE1(φ(X1))=ˆE2(φ(X2)), ∀φ∈Cl,Lip(R), |
whenever the sub-linear expectations are finite. A sequence {Xn;n⩾1} of random variables is said to be identically distributed if Xi and X1 are identically distributed for each i⩾1.
Definition 2.4. (WA) Let {Yn;n⩾1} be a sequence of random variables in a sub-linear expectation space (Ω,H,ˆE). The sequence {Yn;n⩾1} is called widely acceptable (WA) if, for all t⩾0 and all n∈N,
ˆEexp(n∑i=1tYi)≤g(n)n∏i=1ˆEexp(tYi), | (2.2) |
where 0<g(n)<∞.
Definition 2.5. [22] A function L:(0,∞)→(0,∞) is:
(i) A slowly varying function (at infinity), if for any a>0
limx→∞L(ax)/L(x)=1, |
(ii) A regularly varying function with index α>0, if for any a>0
limx→∞L(ax)/L(x)=aα. |
Lemma 2.6. [22] Every regularly varying function (with index α>0) l:(0,∞)→(0,∞) is of the form
l(x)=xαL(x), |
where L is a slowly varying function.
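A quick numerical illustration (hypothetical, not from the paper): l(x)=xαlogx is regularly varying with index α, since L(x)=logx is slowly varying; the ratio l(ax)/l(x) approaches aα as x grows:

import math

# Hypothetical example: l(x) = x**alpha * log(x), with L(x) = log(x) slowly varying.
alpha, a = 1.5, 3.0

def l(x):
    return x ** alpha * math.log(x)

for x in (1e2, 1e4, 1e6, 1e8):
    print(x, l(a * x) / l(x))   # approaches a**alpha = 3.0**1.5 ≈ 5.196 slowly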
In the following, let {Xn;n⩾1} be a sequence of random variables in (Ω,H,ˆE). The symbol c stands for a generic positive constant which may differ from one place to another. Let ax∼bx denote limx→∞ax/bx=1, and an≪bn denote that there exists a constant c>0 such that an⩽cbn for sufficiently large n; I(⋅) denotes an indicator function. a∨b means the maximum of a and b, while a∧b means the minimum of a and b.
To prove our results, we need the following lemmas.
The following lemma can be found in [17].
Lemma 2.7. [17] Suppose X∈H, α>0, p>0, and l(x) is a slowly varying function.
(i) Then, for ∀c>0,
CV(|X|pl(|X|1/α))<∞ ⟺ ∞∑n=1nαp−1l(n)V(|X|>cnα)<∞, |
taking l(x)=1 and logx, respectively, we can get that for ∀c>0,
CV(|X|p)<∞ ⇔ ∞∑n=1nαp−1V(|X|>cnα)<∞, |
CV(|X|plog|X|)<∞ ⟺ ∞∑n=1nαp−1lognV(|X|>cnα)<∞. |
(ii) If CV(|X|pl(|X|1/α))<∞, then for any θ>1 and c>0,
∞∑k=1θkαpl(θk)V(|X|>cθkα)<∞, |
taking l(x)=1 and logx, respectively, we have
CV(|X|p)<∞ ⇒ ∞∑k=1θkαpV(|X|>cθkα)<∞, |
CV(|X|plog|X|)<∞ ⇒ ∞∑k=1θkαp(logθk)V(|X|>cθkα)<∞. |
The last lemma is the exponential inequality for WA random variables, which can be found in [19].
Lemma 2.8. [19] Let {X1,X2,⋯,Xn} be a sequence of random variables in (Ω,H,ˆE), with ˆEXi⩽0 for 1⩽i⩽n. Let d>0 be a real number and define X(d)=min{X,d}. Assume that Yi:=X(d)i=min{Xi,d}, 1⩽i⩽n, satisfy (2.2) for all t>0. Then, for Sn=∑ni=1Xi and all x>0, we have
V(Sn⩾x)⩽V(max1⩽i⩽nXi>d)+g(n)exp(x/d−(x/d)ln(1+xd/∑ni=1ˆEX2i)). |
Next, we give the main theorems of this article and their proofs.
Let {Xn;n⩾1} be a sequence of random variables in the sub-linear expectation space (Ω,H,ˆE), α>1/2, αp>1, ε>0, δ>0 and β1=[α(p∧2)−1]ε/(4(αp−1+δ))>0. For fixed n⩾1, denote for 1⩽i⩽n that
Yi=−β1nαI(Xi<−β1nα)+XiI(|Xi|⩽β1nα)+β1nαI(Xi>β1nα). | (3.1) |
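In plain terms, (3.1) clips Xi to the interval [−β1nα,β1nα]; the remainder Zi=Xi−Yi used later in the proof is the part cut off outside this interval. A minimal sketch (the values of α, p, ε, δ and the sample data below are hypothetical, chosen only so that α>1/2 and αp>1 hold):

import numpy as np

# Hypothetical illustration of the truncation (3.1): Y_i is X_i clipped at level
# beta1 * n**alpha, and Z_i = X_i - Y_i is the over-threshold remainder.
alpha, p, eps, delta = 0.75, 2.0, 1.0, 0.5
beta1 = (alpha * min(p, 2.0) - 1.0) * eps / (4.0 * (alpha * p - 1.0 + delta))

def truncate(X, n):
    b = beta1 * n ** alpha
    Y = np.clip(X, -b, b)        # Y_i of (3.1)
    Z = X - Y                    # Z_i = X_i - Y_i
    return Y, Z

X = np.array([-5.0, -0.3, 0.1, 2.0, 7.5])
Y, Z = truncate(X, n=10)
print(Y, Z)                      # truncation level beta1 * 10**0.75 ≈ 0.70 here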
Theorem 3.1. Let α>1/2, αp>1 and {Xn;n⩾1} be a sequence of random variables in (Ω,H,ˆE) with ˆEXi=ˆεXi=0 if p>1, such that the sequence {Yi;1⩽i⩽n} of truncated random variables is WA and the control coefficient g(n) in (2.2) is a regularly varying function with index δ for some δ>0. Assume that {ani;1⩽i⩽n,n⩾1} is an array of real numbers and that there exists some q>max{2,p} such that
n∑i=1|ani|q=O(n), |ani|⩽c | (3.2) |
and there also exist a random variable X∈H and a constant c satisfying
ˆE[f(Xn)]⩽cˆE[f(X)], n⩾1, 0⩽f∈Cl,Lip(R), | (3.3) |
then
ˆE|X|p⩽CV(|X|p)<∞, | (3.4) |
implies that for all ε>0
∞∑n=1nαp−2V(|n∑i=1aniXi|>εnα)<∞. | (3.5) |
Let 0<β2<min{(2∧(p∨r))/(2r),[α(2∧(p∨r))−1]/(2(αp−1+δ))}. For any 1⩽i⩽n,n⩾1, and t⩾nαr, denote
Y′i=−β2t1/rI(Xi<−β2t1/r)+XiI(|Xi|⩽β2t1/r)+β2t1/rI(Xi>β2t1/r). | (3.6) |
Theorem 3.2. Let r>0, α>1/2, α(p∨r)>1 and {Xn;n⩾1} be a sequence of random variables in (Ω,H,ˆE) with ˆEXi=ˆεXi=0 if p∨r>1, such that the sequence {Y′i;1⩽i⩽n} of truncated random variables is WA and the control coefficient g(n) in (2.2) is a regularly varying function with index δ for some δ>0. Assume that {ani;1⩽i⩽n,n⩾1} is an array of real numbers and that condition (3.2) holds for some q>max{2,p∨r}; moreover, assume that condition (3.3) also holds. Then
{ˆE|X|p∨r⩽CV(|X|p∨r)<∞ if r≠p;ˆE|X|plog|X|⩽CV(|X|plog|X|)<∞ if r=p; | (3.7) |
implies that for any ε>0,
∞∑n=1nαp−αr−2CV(|n∑i=1aniXi|−εnα)r+<∞. | (3.8) |
Remark. Theorem 3.2 extends the complete moment convergence for weighted sums of random variables in probability space obtained in [21] to complete integral convergence for weighted sums of WA random variables in sub-linear expectation space.
Proof of Theorem 3.1. Since n∑i=1aniXi=n∑i=1a+niXi−n∑i=1a−niXi, we have
∞∑n=1nαp−2V(|n∑i=1aniXi|>εnα)⩽∞∑n=1nαp−2V(|n∑i=1a+niXi|>εnα2)+∞∑n=1nαp−2V(|n∑i=1a−niXi|>εnα2). |
So, without loss of generality, we can assume that ani⩾0 for 1⩽i⩽n and n⩾1.
If we want to prove (3.5), we just need to prove
∞∑n=1nαp−2V(n∑i=1aniXi>εnα)<∞,∀ε>0. | (3.9) |
Since {−Xn;n⩾1} still satisfies the conditions of Theorem 3.1, we can similarly obtain
∞∑n=1nαp−2V(n∑i=1aniXi<−εnα)<∞,∀ε>0. | (3.10) |
From (3.9) and (3.10), we can get (3.5). The following proves that (3.9) holds. The sequence {Yi;1⩽i⩽n} is defined by (3.1). For fixed n⩾1, denote for 1⩽i⩽n that
Zi=Xi−Yi=(Xi+β1nα)I(Xi<−β1nα)+(Xi−β1nα)I(Xi>β1nα). |
It is easily checked that for ∀ε>0,
(n∑i=1aniXi>εnα)⊂n⋃i=1(|Xi|>β1nα)∪(n∑i=1aniYi>εnα). | (3.11) |
So, we have
∞∑n=1nαp−2V(n∑i=1aniXi>εnα)⩽∞∑n=1nαp−2n∑i=1V(|Xi|>β1nα)+∞∑n=1nαp−2V(n∑i=1aniYi>εnα)⩽∞∑n=1nαp−2n∑i=1V(|Xi|>β1nα)+∞∑n=1nαp−2V(n∑i=1ani(Yi−ˆEYi)>εnα−|n∑i=1aniˆEYi|):=I1+I2. |
To prove (3.9), it suffices to show I1<∞ and I2<∞.
For 0<μ<1, let g(x)∈Cl,Lip(R) be a decreasing function when x⩾0 such that 0⩽g(x)⩽1 for all x and g(x)=1 if |x|⩽μ, g(x)=0 if |x|⩾1. Then
I(|x|⩽μ)⩽g(|x|)⩽I(|x|⩽1),I(|x|>1)⩽1−g(|x|)⩽I(|x|>μ). | (3.12) |
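One concrete choice satisfying these requirements (an illustration only; the paper does not fix a particular g) is the piecewise-linear ramp below, which is Lipschitz and hence lies in Cl,Lip(R):

# Hypothetical piecewise-linear choice of the smoothing function g in (3.12):
# g = 1 on [0, mu], g = 0 on [1, infinity), linear in between (0 < mu < 1), so
# I(|x| <= mu) <= g(|x|) <= I(|x| <= 1) and I(|x| > 1) <= 1 - g(|x|) <= I(|x| > mu).
def g(x, mu=0.5):
    x = abs(x)
    if x <= mu:
        return 1.0
    if x >= 1.0:
        return 0.0
    return (1.0 - x) / (1.0 - mu)    # Lipschitz constant 1/(1 - mu)

for x in (0.2, 0.5, 0.75, 1.0, 3.0):
    print(x, g(x))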
By (3.12), Lemma 2.7 (i) and (3.3), we can get that
I1⩽∞∑n=1nαp−2n∑i=1ˆE(1−g(|Xi|β1nα)) ⩽c∞∑n=1nαp−1ˆE(1−g(|X|β1nα)) ⩽c∞∑n=1nαp−1V(|X|>cnα) <∞. |
In the following, we prove that I2<∞. Firstly, we will show that
n−α|n∑i=1aniˆEYi|→0, n→∞. |
By (3.2) and the Hölder inequality, we have for any 0<ρ<q that
n∑i=1aniρ⩽(n∑i=1(aniq))ρ/q(n∑i=11)1−ρ/q⩽cn. | (3.13) |
For any λ>0, by (3.12) and Cr inequality, we have
|Yi|λ≪|Xi|λI(|Xi|≤β1nα)+β1λnαλI(|Xi|>β1nα)≤|Xi|λg(μ|Xi|β1nα)+β1λnαλ(1−g(|Xi|β1nα)),|Zi|λ≪|Xi+β1nα|λI(Xi<−β1nα)+|Xi−β1nα|λI(Xi>β1nα)≤|Xi|λ(1−g(|Xi|β1nα)). |
Thus
ˆE|Yi|λ≪ˆE|X|λg(μ|X|β1nα)+β1λnαλˆE(1−g(|X|β1nα)) |
≤ˆE|X|λg(μ|X|β1nα)+β1λnαλV(|X|>μβ1nα), |
ˆE|Zi|λ≪ˆE|Xi|λ(1−g(|Xi|β1nα))≪ˆE|X|λ(1−g(|X|β1nα)). | (3.14) |
By Lemma 2.7 (i), we can get that
∞∑n=1V(|X|>cnα)⩽∞∑n=1nαp−1V(|X|>cnα)<∞, |
and V(|X|>cnα)↓, so we get nV(|X|>cnα)→0 as n→∞.
When 0<p⩽1. Since q>max{2,p}>1, by (3.13), (3.14) and αp>1, we have that
n−α|n∑i=1aniˆEYi|≤n−αn∑i=1aniˆE|Yi| |
≪n1−α(ˆE|X|g(μ|X|β1nα)+β1nαV(|X|>cnα)) |
=n1−αˆE|X|g(μ|X|β1nα)+β1nV(|X|>cnα) |
⩽cn1−αpˆE|X|p→0, n→∞. |
When p>1. Since q>p>1, by (3.13), (3.14) and ˆEXi=0, we have
n−α|n∑i=1aniˆEYi|≤n−αn∑i=1aniˆE|Xi−Yi|=n−αn∑i=1aniˆE|Zi|≪n1−αˆE|X|(1−g(|X|β1nα))≤cn1−αpˆE|X|p→0,n→∞. |
Hence, n−α|n∑i=1aniˆEYi|⩽ε/2 for all n large enough, which implies that
I2⩽∞∑n=1nαp−2V(n∑i=1ani(Yi−ˆEYi)>εnα2). |
Since the sequence {Yi;1⩽i⩽n} of truncated random variables is assumed to be WA and ani⩾0, by (2.2) we have
ˆEexp(n∑i=1taniYi)≤g(n)n∏i=1ˆEexp(taniYi). |
Because exp(−n∑i=1taniˆEYi)⩾0, we can get that
ˆEexp(n∑i=1tani(Yi−ˆEYi))=exp(−n∑i=1taniˆEYi)ˆEexp(n∑i=1taniYi) ≤n∏i=1exp(−taniˆEYi)g(n)n∏i=1ˆEexp(taniYi) =g(n)n∏i=1ˆEexp(tani(Yi−ˆEYi)), |
which means that ani(Yi−ˆEYi) are WA random variables. Without loss of generality, according to (3.2), we assume that ani⩽1/2, then
ani(Yi−ˆEYi)⩽ani(|Yi|+ˆE|Yi|)⩽2aniβ1nα⩽β1nα. |
We can verify that ani(Yi−ˆEYi)=min{ani(Yi−ˆEYi), β1nα}.
So {ani(Yi−ˆEYi);1⩽i⩽n,n⩾1} satisfy the conditions in Lemma 2.8 with ˆE(ani(Yi−ˆEYi))=0. Taking x=εnα/2, d=β1nα=[α(p∧2)−1]εnα/(4(αp−1+δ)) in Lemma 2.8, we obtain
I2⩽∞∑n=1nαp−2V(n∑i=1ani(Yi−ˆEYi)>εnα2) |
⩽∞∑n=1nαp−2[V(max1⩽i⩽n(ani(Yi−ˆEYi))>d)+g(n)exp(εnα2d−εnα2dln(1+εnα2dn∑i=1ˆE|ani(Yi−ˆEYi)|2))] ⩽∞∑n=1nαp−2n∑i=1V(|ani(Yi−ˆEYi)|>β1nα)+c∞∑n=1nαp−2g(n)(n−2αn∑i=1ˆE|ani(Yi−ˆEYi)|2)ε2β1 :=I21+I22. |
Let β>0, gjβ(x)∈Cl,Lip(R), j⩾1, suppose gjβ(x) is an even function, such that 0⩽gjβ(x)⩽1 for all x and gjβ(x)=1 if β2(j−1)α/μ⩽|x|⩽β2jα/μ, gjβ(x)=0 if |x|<β2(j−1)α or |x|>(1+μ)β2jα/μ. Then for any l>0,
gjβ(X)⩽I(β2α(j−1)<|X|⩽(1+μ)β2αj/μ), |X|lg(μ|X|β2αk)⩽βl/μl+k∑j=1|X|lgjβ(X). | (3.15) |
The random variable Y is defined from X by the same truncation as in (3.1):
Y=−β1nαI(X<−β1nα)+XI(|X|⩽β1nα)+β1nαI(X>β1nα). |
According to the Markov inequality, the Cr inequality, (3.2), (3.3), (3.14), (3.15), Lemma 2.7, q>p and g(x)↓ when x⩾0, we have
I21≪∞∑n=1nαp−2⋅n−αqn∑i=1aniqˆE|Yi|q ≪∞∑n=1nαp−αq−1ˆE|Y|q ≪∞∑n=1nαp−αq−1ˆE|X|qg(μ|X|β1nα)+∞∑n=1nαp−1V(|X|>cnα) ≪∞∑k=12k−1∑n=2k−1nαp−αq−1ˆE|X|qg(μ|X|β1nα) ≪∞∑k=12k(p−q)αˆE|X|qg(μ|X|β12kα) ⩽∞∑k=12k(p−q)αˆE(β1qμq+k∑j=1|X|qgjβ1(X)) ⩽c∞∑k=12k(p−q)α+∞∑k=12k(p−q)αk∑j=1ˆE|X|qgjβ1(X) ≪∞∑j=1ˆE|X|qgjβ1(X)∞∑k=j2k(p−q)α ≪∞∑j=12j(p−q)αˆE|X|qgjβ1(X) ≪∞∑j=12αpjV(|X|>c2jα)<∞. |
Next, we prove I22<∞. If p⩾2, then d=(2α−1)εnα/(4(αp−1+δ)). By (3.3), (3.4), (3.13), αp>1, the Cr inequality and the condition on g(n), there exists a slowly varying function L(n) such that g(n)=nδL(n), and we have
I22≪∞∑n=1nαp−2g(n)(n−2αn∑i=1ani2ˆEY2i)ε2β1 ⩽c∞∑n=1nαp−2g(n)(n1−2αˆEX2)ε2β1 ⩽c∞∑n=1nαp−2+δ−2(αp−1+δ)L(n) ⩽c∞∑n=1n−αpL(n)<∞. |
If p<2, then d=(αp−1)εnα/(4(αp−1+δ)). By (3.3), (3.4), (3.13), αp>1 and the Cr inequality, we have
I22≪∞∑n=1nαp−2g(n)(n−2αn∑i=1ani2ˆEY2i)ε2β1 ⩽c∞∑n=1nαp−2g(n)[n−2αn∑i=1ani2⋅(nα(2−p)ˆE|X|p)]ε2β1 ⩽c∞∑n=1nαp−2+δ−2(αp−1+δ)L(n) ⩽c∞∑n=1n−αpL(n)<∞. |
Hence, the proof of Theorem 3.1 is completed.
Proof of Theorem 3.2. Without loss of generality, we can also assume that ani⩾0 for 1⩽i⩽n and n⩾1. Here, the definitions of g(x) and gjβ(x) are the same as in the proof of Theorem 3.1. For any ε>0, we have that
∞∑n=1nαp−αr−2CV(|n∑i=1aniXi|−εnα)r+=∞∑n=1nαp−αr−2∫∞0V(|n∑i=1aniXi|−εnα>t1/r) dt=∞∑n=1nαp−αr−2∫nαr0V(|n∑i=1aniXi|−εnα>t1/r) dt+∞∑n=1nαp−αr−2∫∞nαrV(|n∑i=1aniXi|−εnα>t1/r) dt⩽∞∑n=1nαp−2V(|n∑i=1aniXi|>εnα)+∞∑n=1nαp−αr−2∫∞nαrV(|n∑i=1aniXi|>t1/r) dt:=J1+J2. |
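(The first equality above is the Choquet-integral representation of the nonnegative random variable (|∑ni=1aniXi|−εnα)r+, combined with the observation that, for t>0, its r-th power exceeds t exactly when |∑ni=1aniXi|−εnα>t1/r; the integral is then split at t=nαr in the second step.)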
According to Theorem 3.1, we have J1<∞. So, to prove (3.8), we only need to prove J2<∞. Hence, we first prove that
H:=∞∑n=1nαp−αr−2∫∞nαrV(n∑i=1aniXi>t1/r) dt<∞. | (3.16) |
The definition of {Y′i;1⩽i⩽n} is (3.6). For any 1⩽i⩽n,n⩾1, and t⩾nαr, denote
Z′i=(Xi+β2t1/r)I(Xi<−β2t1/r)+(Xi−β2t1/r)I(Xi>β2t1/r). | (3.17) |
We have
H⩽∞∑n=1nαp−αr−2∫∞nαrn∑i=1V(|Xi|>β2t1/r)dt+∞∑n=1nαp−αr−2∫∞nαrV(n∑i=1aniY′i>t1/r)dt ⩽∞∑n=1nαp−αr−2∫∞nαrn∑i=1V(|Xi|>β2t1/r)dt+∞∑n=1nαp−αr−2∫∞nαrV(n∑i=1ani(Y′i−ˆEY′i)>t1/r−|n∑i=1aniˆEY′i|)dt :=H1+H2. |
In order to prove H<∞, it suffices to show H1<∞ and H2<∞. Firstly, we prove H1<∞. By (3.7), (3.12), Lemma 2.7 (i) and g(x)↓ when x⩾0, we have
H1⩽∞∑n=1nαp−αr−2∫∞nαrn∑i=1ˆE(1−g(|Xi|β2t1/r))dt ≪∞∑n=1nαp−αr−1∫∞nαrˆE(1−g(|X|β2t1/r))dt =∞∑n=1nαp−αr−1∞∑m=n∫(m+1)αrmαrˆE(1−g(|X|β2t1/r))dt ≪∞∑n=1nαp−αr−1∞∑m=nmαr−1ˆE(1−g(|X|β2mα)) = ∞∑m=1mαr−1ˆE(1−g(|X|β2mα))m∑n=1nαp−αr−1 ≪{∞∑m=1mαp−1V(|X|>μβ2mα) if r<p;∞∑m=1mαp−1logmV(|X|>μβ2mα) if r=p;∞∑m=1mαr−1V(|X|>μβ2mα) if r>p; = {∞∑m=1mα(p∨r)−1V(|X|>cmα)<∞ if r≠p;∞∑m=1mαp−1logmV(|X|>cmα)<∞ if r=p. |
Then, we prove H2<∞. Firstly, we will show that
supt⩾nαrt−1/r|n∑i=1aniˆEY′i|→0, n→∞. |
Similar to (3.14), by (3.12), (3.17), Cr inequality, for any λ>0, we can get that
ˆE|Y′i|λ≪ˆE|X|λg(μ|X|β2t1/r)+β2λtλ/rV(|X|>μβ2t1/r), ˆE|Z′i|λ≪ˆE|Xi|λ(1−g(|Xi|β2t1/r))≪ˆE|X|λ(1−g(|X|β2t1/r)). | (3.18) |
The random variable Y′ is defined from X by the same truncation as in (3.6):
Y′=−β2t1/rI(X<−β2t1/r)+XI(|X|⩽β2t1/r)+β2t1/rI(X>β2t1/r). |
When 0<p∨r⩽1. Since t⩾nαr, ˆE|X|p∨r<∞, and α(p∨r)>1, we get
supt⩾nαrt−1/r|n∑i=1aniˆEY′i|≪supt⩾nαrt−1/rnˆE|Y′| ≪supt⩾nαrt−1/rn(ˆE|X|g(μ|X|β2t1/r)+β2t1/rV(|X|>μβ2t1/r)) = supt⩾nαrt−1/rn(ˆE|X|(p∨r)⋅|X|1−(p∨r)g(μ|X|β2t1/r)+β2t1/rV(|X|>μβ2t1/r)) ⩽cn1−α(p∨r)ˆE|X|p∨r+β2nV(|X|>cnα)→0, n→∞.
When p∨r>1. Since ˆEXi=0 and t⩾nαr, we can get that
supt⩾nαrt−1/r|n∑i=1aniˆEY′i|⩽supt⩾nαrt−1/rn∑i=1ani|ˆEXi−ˆEY′i| ⩽supt⩾nαrt−1/rn∑i=1aniˆE|Z′i| ⩽cn1−αˆE|X|(1−g(|X|β2nα)) ⩽cn1−α⋅ˆE|X|⋅|X|p∨r−1nα(p∨r−1)(1−g(|X|μβ2nα)) ⩽cn1−α(p∨r)ˆE|X|p∨r→0, n→∞. |
It follows that for all n large enough,
supt⩾nαrt−1/r|n∑i=1aniˆEY′i|<1/2, |
which implies that
H2⩽∞∑n=1nαp−αr−2∫∞nαrV(n∑i=1ani(Y′i−ˆEY′i)>t1/r2)dt. |
For fixed t⩾nαr and n⩾1, by the assumptions of Theorem 3.2 and assuming that ani⩽1/2, we know that {ani(Y′i−ˆEY′i);1⩽i⩽n,n⩾1} are WA random variables with ˆE(ani(Y′i−ˆEY′i))=0 and ani(Y′i−ˆEY′i)=min{ani(Y′i−ˆEY′i), β2t1/r}. Applying Lemma 2.8 to V(n∑i=1ani(Y′i−ˆEY′i)>t1/r/2), with 0<β2<min{(2∧(p∨r))/(2r),[α(2∧(p∨r))−1]/(2(αp−1+δ))}, d=β2t1/r, x=t1/r/2, we have
V(n∑i=1ani(Y′i−ˆEY′i)>t1/r2),⩽V(max1⩽i⩽n(ani(Y′i−ˆEY′i))>d)+g(n)exp(xd−xdln(1+xdn∑i=1ˆE|ani(Y′i−ˆEY′i)|2))⩽n∑i=1V(|ani(Y′i−ˆEY′i)|>ct1/r)+cg(n)(t−2/rn∑i=1ˆE|ani(Y′i−ˆEY′i)|2)12β2, |
thus
H2⩽∞∑n=1nαp−αr−2∫∞nαrn∑i=1V(|ani(Y′i−ˆEY′i)|>ct1/r)dt +c∞∑n=1nαp−αr−2g(n)∫∞nαr(t−2/rn∑i=1ˆE|ani(Y′i−ˆEY′i)|2)12β2dt :=H21+H22.
So, to prove H2<∞, we first need to prove H21<∞. By Markov inequality, Cr inequality, (3.12), (3.15), (3.17), Lemma 2.7 (ii), q>p∨r and H1<∞, we have that
H21≪∞∑n=1nαp−αr−2∫∞nαr(t−q/−qrrn∑i=1aniqˆE|Y′i|q)dt ≪∞∑n=1nαp−αr−1∫∞nαrt−q/−qrr(ˆE|X|qg(μ|X|β2t1/r)+β2qtq/rˆE(1−g(|X|μβ2t1/r)))dt ⩽∞∑n=1nαp−αr−1∫∞nαrt−q/−qrrˆE|X|qg(μ|X|β2t1/r)dt+c∞∑n=1nαp−αr−1∫∞nαrˆE(1−g(|X|μβ2t1/r))dt ≪∞∑n=1nαp−1−αr∞∑m=n∫(m+1)αrmαrt−q/rˆE|X|qg(μ|X|β2t1/r)dt ≪∞∑n=1nαp−1−αr∞∑m=nmαr−αq−1ˆE|X|qg(μ|X|β2(m+1)α) =∞∑m=1mαr−αq−1ˆE|X|qg(μ|X|β2(m+1)α)m∑n=1nαp−1−αr ≪{∞∑m=1mα(p∨r)−αq−1ˆE|X|qg(μ|X|β2(m+1)α) if r≠p;∞∑m=1mαr−αq−1logmˆE|X|qg(μ|X|β2(m+1)α) if r=p; |
={∞∑k=12k−1∑m=2k−1mα(p∨r)−αq−1ˆE|X|qg(μ|X|β2(m+1)α) if r≠p;∞∑k=12k−1∑m=2k−1mαr−αq−1logmˆE|X|qg(μ|X|β2(m+1)α) if r=p; ≪{∞∑k=12k[α(p∨r)−αq]ˆE|X|qg(μ|X|β22kα) if r≠p;∞∑k=12k(αr−αq)log2kˆE|X|qg(μ|X|β22kα) if r=p; ≪{∞∑k=12k[α(p∨r)−αq]ˆE(β2qμq+k∑j=1|X|qgjβ2(X)) if r≠p;∞∑k=12k(r−q)αlog2kˆE(β2qμq+k∑j=1|X|qgjβ2(X)) if r=p; ≪{∞∑k=12k[α(p∨r)−αq]+∞∑k=12k[α(p∨r)−αq]k∑j=1ˆE|X|qgjβ2(X) if r≠p;∞∑k=12k(r−q)αlog2k+∞∑k=12k(r−q)αlog2kk∑j=1ˆE|X|qgjβ2(X) if r=p; ≪{∞∑j=1ˆE|X|qgjβ2(X)∞∑k=j2k[α(p∨r)−αq] if r≠p;∞∑j=1ˆE|X|qgjβ2(X)∞∑k=j2k(r−q)αlog2k if r=p; ≪{∞∑j=12α(p∨r)jV(|X|>c2jα)<∞ if r≠p;∞∑j=12αrjlog2jV(|X|>c2jα)<∞ if r=p. |
Then, we prove H22<∞. Similar to previous proof, we consider the following two situations.
If (p∨r)⩾2. By β2<1/r, αp−2+δ+(1−2α)/(2β2)<−1, (3.3), (3.7), (3.13), Cr inequality, we have
H22≪∞∑n=1nαp−αr−2g(n)∫∞nαr(t−2/rn∑i=1ani2ˆEY′i2)12β2dt ⩽c∞∑n=1nαp−αr−2+12β2g(n)∫∞nαr(t−2/rˆEX2)12β2dt ⩽c∞∑n=1nαp−αr−2+12β2g(n)∫∞nαrt−1rβ2dt |
⩽c∞∑n=1nαp−2+δ+1−2α2β2L(n)<∞. |
If (p∨r)<2. By β2<(p∨r)/(2r), αp−2+δ+(1−(p∨r)α)/(2β2)<−1, ˆE|X|p∨r<∞, Cr inequality, we have
H22≪∞∑n=1nαp−αr−2g(n)∫∞nαr(t−2/rn∑i=1ani2ˆEY′i2)12β2dt ⩽c∞∑n=1nαp−αr−2g(n)∫∞nαr(t−p∨rrn∑i=1ani2ˆE|X|p∨r)12β2dt ⩽c∞∑n=1nαp−αr−2+12β2g(n)∫∞nαrt−p∨r2rβ2dt ⩽c∞∑n=1nαp−2+δ+1−(p∨r)α2β2L(n)<∞. |
We have proved (3.16). Since {−Xn;n⩾1} still satisfies the conditions of Theorem 3.2, applying the same argument to {−Xn;n⩾1} instead of {Xn;n⩾1}, we can obtain
∞∑n=1nαp−αr−2∫∞nαrV(n∑i=1aniXi<−t1/r) dt<∞. | (3.19) |
According to (3.16) and (3.19), we can get J2<∞. Hence, this finishes the proof of Theorem 3.2.
In conclusion, we prove the complete convergence and complete integral convergence for weighted sums of WA random variables under the sub-linear expectations.
In this paper, we extend conclusions from probability space to sub-linear expectation space and obtain complete convergence and complete integral convergence for weighted sums of WA random variables under sub-linear expectations, which enriches the limit theory of WA random variable sequences in sub-linear expectation space. In future work, we will establish, in the sub-linear expectation space, counterparts of the important inequalities and moment inequalities available in probability space, overcome the problems caused by the sub-additivity of V and ˆE, and generalize complete convergence and complete integral convergence under sub-linear expectations to obtain conclusions similar to those in the original probability space.
This paper was supported by the National Natural Science Foundation of China (12061028) and Guangxi Colleges and Universities Key Laboratory of Applied Statistics.
All authors declare no conflicts of interest in this paper.
[1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, Stoch. Anal. Appl., 2 (2006), 541-567. http://doi.org/10.1007/978-3-540-70847-6_25
[2] S. G. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223-2253. http://doi.org/10.1016/j.spa.2007.10.015
[3] S. G. Peng, Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Sci. China Ser. A-Math., 52 (2009), 1391-1411. http://doi.org/10.1007/s11425-009-0121-8
[4] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China-Math., 59 (2016), 751-768. http://doi.org/10.1007/s11425-015-5105-2
[5] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China-Math., 59 (2016), 2503-2526. http://doi.org/10.1007/s11425-016-0079-1
[6] L. X. Zhang, Self-normalized moderate deviation and laws of the iterated logarithm under G-expectation, Commun. Math. Stat., 4 (2016), 229-263. https://doi.org/10.1007/s40304-015-0084-8
[7] L. X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations, 2016. Available from: http://arXiv.org/abs/1608.00710v1.
[8] Q. Y. Wu, Y. Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2017), 252-270. http://doi.org/10.1016/j.jmaa.2017.11.053
[9] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, P. Natl. A. Sci. USA, 33 (1947), 25-31. http://doi.org/10.1073/pnas.33.2.25
[10] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica, 16 (1988), 177-201.
[11] D. H. Qiu, P. Y. Chen, Complete and complete moment convergence for i.i.d. random variables under exponential moment conditions, Commun. Stat.-Theory Methods, 46 (2017), 4510-4519. http://doi.org/10.1080/03610926.2015.1085566
[12] Q. Y. Wu, Y. Y. Jiang, Complete convergence and complete moment convergence for negatively associated sequences of random variables, J. Inequal. Appl., 2016 (2016), 157. http://doi.org/10.1186/s13660-016-1107-z
[13] A. T. Shen, Y. Zhang, W. J. Wang, Complete convergence and complete moment convergence for extended negatively dependent random variables, Filomat, 31 (2017), 1381-1394. http://doi.org/10.2298/FIL1705381S
[14] Q. Y. Wu, Y. Y. Jiang, Complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, Filomat, 34 (2020), 1093-1104. http://doi.org/10.2298/FIL2004093W
[15] F. X. Feng, D. C. Wang, Q. Y. Wu, H. W. Huang, Complete and complete moment convergence for weighted sums of arrays of rowwise negatively dependent random variables under the sub-linear expectations, Commun. Stat.-Theory Methods, 50 (2021), 594-608. http://doi.org/10.1080/03610926.2019.1639747
[16] Z. W. Liang, Q. Y. Wu, Theorems of complete convergence and complete integral convergence for END random variables under sub-linear expectations, J. Inequal. Appl., 2019 (2019), 114. http://doi.org/10.1186/s13660-019-2064-0
[17] H. Y. Zhong, Q. Y. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. http://doi.org/10.1186/s13660-017-1538-1
[18] D. W. Lu, Y. Meng, Complete and complete integral convergence for arrays of rowwise widely negative dependent random variables under the sub-linear expectations, Commun. Stat.-Theory Methods, 2020, 1786585. http://doi.org/10.1080/03610926.2020.1786585
[19] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. http://doi.org/10.1016/j.jmaa.2019.123662
[20] R. Hu, Q. Y. Wu, Complete convergence for weighted sums of widely acceptable random variables under sublinear expectations, Discrete Dyn. Nat. Soc., 2021 (2021), 5526609. http://doi.org/10.1155/2021/5526609
[21] M. M. Ge, X. Deng, Complete moment convergence for weighted sums of extended negatively dependent random variables, J. Math. Inequal., 13 (2019), 159-175. http://doi.org/10.7153/jmi-2019-13-12
[22] E. Seneta, Regularly varying functions, Lecture Notes in Mathematics, Vol. 508, Springer, Berlin, Heidelberg, 1976, 1-52. https://doi.org/10.1007/BFb0079658