In this paper, complete convergence and complete integral convergence for weighted sums of negatively dependent random variables under sub-linear expectations are established. The results extend some complete moment convergence theorems from the classical probability space to the setting of sub-linear expectation spaces.
Citation: Lunyi Liu, Qunying Wu. Complete integral convergence for weighted sums of negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2023, 8(9): 22319-22337. doi: 10.3934/math.20231138
Probability limit theory is an important research topic in mathematical statistics and has found extensive application in mathematics, statistics and finance. However, the limitations of classical limit theory have become increasingly apparent as it is applied to finance, risk measurement and other areas in which the underlying models are subject to uncertainty; in such situations additive (linear) expectations are no longer adequate, whereas sub-linear expectations remain amenable to analysis and computation. To address this issue, Peng [1,2,3] put forward the concept of the sub-linear expectation space, constructed a complete theoretical system for it, and effectively overcame the limitations of traditional probability theory in statistics, economics and other fields. In recent years, an increasing number of scholars have carried out extensive research in this field and obtained numerous results. Notably, Peng [1,2,3] and Zhang [4,5,6] derived a series of significant conclusions, including the strong law of large numbers, exponential inequalities and Rosenthal's inequality under sub-linear expectations. These results laid a solid foundation for investigating the limit theory of sub-linear expectation spaces and have greatly advanced the understanding of sub-linear expectation theory.
The concepts of complete convergence and complete moment convergence play an important role in probability limit theory. Complete convergence was first introduced by Hsu and Robbins [7], and Chow [8] introduced complete moment convergence for independent random variables, which has since been developed extensively. Complete moment convergence is a refinement of complete convergence and gives more precise information, which has prompted further investigation. Qiu and Chen [9] established complete moment convergence for independent and identically distributed random variables, Yang and Hu [10] obtained complete moment convergence for pairwise NQD random variables, and Song and Zhu [11] derived a complete moment convergence theorem for extended negatively dependent random variables. Notably, in the sub-linear expectation setting, complete moment convergence corresponds to complete integral convergence. In recent years, a growing number of scholars have studied complete convergence and complete integral convergence under sub-linear expectations, substantially enriching the associated theory. For example, Li and Wu [12] studied complete integral convergence for arrays of rowwise extended negatively dependent random variables, Lu and Meng [13] examined complete and complete integral convergence for arrays of rowwise widely negative dependent random variables, and Chen and Wu [14] investigated complete convergence and complete integral convergence of partial sums for moving average processes. It is noteworthy that complete convergence and complete integral convergence with maxima under sub-linear expectations are available only when the sequences are independent or negatively dependent. For example, Feng and Zeng [15] proved a complete convergence theorem for the maximum of partial sums under sub-linear expectations, and Xu et al. [16] and Xu and Kong [17] discussed complete convergence and complete integral convergence for negatively dependent sequences. These findings suggest that the theory of complete integral convergence still calls for further development. The objective of this paper is to extend the complete moment convergence results established by Wu and Wang [18] to the sub-linear expectation space and to derive the corresponding results.
The rest of this article is organized as follows. Section 2 introduces basic notation, concepts and related properties of sub-linear expectations, together with several lemmas. Section 3 states the complete convergence and complete integral convergence results for weighted sums of negatively dependent random variables under sub-linear expectations. Finally, Section 4 uses the lemmas to prove the main results. Throughout, $c$ denotes a positive constant, not depending on $n$, whose value may vary from place to place; $\ln x$ denotes $\log_2 x$; and $I(\cdot)$ denotes the indicator function.
We use the framework and notation of Peng [1,2,3] and Zhang [6]. Let $(\Omega,\mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega,\mathcal{F})$ such that if $X_1,X_2,\dots,X_n\in\mathcal{H}$, then $\varphi(X_1,\dots,X_n)\in\mathcal{H}$ for each $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the linear space of local Lipschitz functions $\varphi$ satisfying
$$|\varphi(x)-\varphi(y)|\le c\,(1+|x|^{m}+|y|^{m})\,|x-y|,\quad \forall\,x,y\in\mathbb{R}^n,$$
for some $c>0$ and $m\in\mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables; in this case we write $X\in\mathcal{H}$.
Definition 2.1. A sub-linear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}:\mathcal{H}\to[-\infty,+\infty]$ satisfying the following properties: for all $X,Y\in\mathcal{H}$, we have
(a) Monotonicity: if $X\ge Y$, then $\hat{\mathbb{E}}(X)\ge\hat{\mathbb{E}}(Y)$;
(b) Constant preserving: $\hat{\mathbb{E}}(c)=c$;
(c) Sub-additivity: $\hat{\mathbb{E}}(X+Y)\le\hat{\mathbb{E}}(X)+\hat{\mathbb{E}}(Y)$;
(d) Positive homogeneity: $\hat{\mathbb{E}}(\lambda X)=\lambda\hat{\mathbb{E}}(X)$, $\lambda\ge0$.
The triple $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sub-linear expectation space.
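A canonical example, stated here only for intuition and not taken from the paper, is the upper expectation over a family of probability measures, $\hat{\mathbb{E}}[X]=\sup_{P\in\mathcal{P}}E_P[X]$. The following Python sketch checks properties (b)-(d) numerically for a small, purely illustrative family of models; the distributions, sample sizes and seed are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite family of candidate models for the pair (X, Y); each entry holds Monte
# Carlo draws under one model.  The family is an illustrative assumption only.
models = [rng.normal(loc=m, scale=s, size=(100_000, 2))
          for m, s in [(0.0, 1.0), (0.3, 1.5), (-0.2, 0.7)]]

def E_hat(f):
    """Upper expectation: supremum over the family of the ordinary expectation of f."""
    return max(f(m[:, 0], m[:, 1]).mean() for m in models)

X = lambda x, y: x
Y = lambda x, y: y

# (b) constant preserving, (c) sub-additivity, (d) positive homogeneity
print(np.isclose(E_hat(lambda x, y: np.full_like(x, 3.0)), 3.0))   # True
print(E_hat(lambda x, y: x + y) <= E_hat(X) + E_hat(Y) + 1e-9)     # True
print(np.isclose(E_hat(lambda x, y: 2.0 * x), 2.0 * E_hat(X)))     # True
```

Monotonicity (a) holds as well, since each ordinary expectation is monotone and taking the supremum preserves the order.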
Given a sub-linear expectation $\hat{\mathbb{E}}$, the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ is defined by
$$\hat{\varepsilon}(X):=-\hat{\mathbb{E}}(-X),\quad \forall\,X\in\mathcal{H}.$$
From the definition, it is easily shown that for all $X,Y\in\mathcal{H}$,
$$\hat{\varepsilon}(X)\le\hat{\mathbb{E}}(X),\quad \hat{\mathbb{E}}(X-Y)\ge\hat{\mathbb{E}}(X)-\hat{\mathbb{E}}(Y),\quad \hat{\mathbb{E}}(X+c)=\hat{\mathbb{E}}(X)+c,\tag{2.1}$$
$$\big|\hat{\mathbb{E}}(X)-\hat{\mathbb{E}}(Y)\big|\le\hat{\mathbb{E}}|X-Y|.\tag{2.2}$$
Definition 2.2. Let $\mathcal{G}\subset\mathcal{F}$. A function $V:\mathcal{G}\to[0,1]$ is called a capacity if
$$V(\emptyset)=0,\quad V(\Omega)=1,\quad V(A)\le V(B)\ \text{for}\ A\subset B,\ A,B\in\mathcal{G}.$$
It is called sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{G}$. In the sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, we define a pair $(\mathbb{V},\mathcal{V})$ of capacities by
$$\mathbb{V}(A):=\inf\big\{\hat{\mathbb{E}}[\xi]: I_A\le\xi,\ \xi\in\mathcal{H}\big\},\qquad \mathcal{V}(A):=1-\mathbb{V}(A^{c}),\quad A\in\mathcal{F},$$
where $A^{c}$ is the complement of $A$. It is obvious that $\mathbb{V}$ is sub-additive, and
$$\hat{\mathbb{E}}f\le\mathbb{V}(A)\le\hat{\mathbb{E}}g,\qquad \hat{\varepsilon}f\le\mathcal{V}(A)\le\hat{\varepsilon}g,\quad \text{if}\ f\le I(A)\le g,\ f,g\in\mathcal{H}.$$
This implies the Markov inequality: for all $x>0$ and $p>0$,
$$\mathbb{V}(|X|\ge x)\le\hat{\mathbb{E}}|X|^{p}/x^{p},$$
which follows from $I(|X|\ge x)\le|X|^{p}/x^{p}\in\mathcal{H}$. By Lemma 4.1 in Zhang [5], we have the Hölder inequality: for all $X,Y\in\mathcal{H}$ and $p,q>1$ with $p^{-1}+q^{-1}=1$,
$$\hat{\mathbb{E}}(|XY|)\le\big(\hat{\mathbb{E}}(|X|^{p})\big)^{1/p}\big(\hat{\mathbb{E}}(|Y|^{q})\big)^{1/q},$$
and, in particular, the Jensen inequality: for all $X\in\mathcal{H}$,
$$\big(\hat{\mathbb{E}}(|X|^{r})\big)^{1/r}\le\big(\hat{\mathbb{E}}(|X|^{s})\big)^{1/s}\quad\text{for }0<r\le s.$$
Definition 2.3. We define the Choquet integrals/expectations $(C_{\mathbb{V}},C_{\mathcal{V}})$ by
$$C_{V}(X)=\int_{0}^{\infty}V(X\ge t)\,dt+\int_{-\infty}^{0}\big[V(X\ge t)-1\big]\,dt,$$
with $V$ replaced by $\mathbb{V}$ and $\mathcal{V}$, respectively.
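For a non-negative random variable the second integral vanishes and $C_{\mathbb{V}}(X)=\int_{0}^{\infty}\mathbb{V}(X\ge t)\,dt$. A minimal numerical sketch, again generating the upper capacity from a purely illustrative finite family of models (all distributions, grid and sample size are assumptions), is:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative assumption: |X| observed under three candidate models.
models = [np.abs(rng.normal(0.0, s, size=50_000)) for s in (0.7, 1.0, 1.5)]

def V_upper(t):
    """Upper capacity V(|X| >= t): supremum of the probability over the family."""
    return max((m >= t).mean() for m in models)

# C_V(|X|) = integral over t of V(|X| >= t), approximated by a Riemann sum on [0, 10].
dt = 0.01
grid = np.arange(0.0, 10.0, dt)
C_V = sum(V_upper(t) for t in grid) * dt
print(f"C_V(|X|) is approximately {C_V:.3f}")  # close to E|X| under the most dispersed model here
```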
Definition 2.4. (Identical distribution) Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined, respectively, in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed if
$$\hat{\mathbb{E}}_1\big(\varphi(X_1)\big)=\hat{\mathbb{E}}_2\big(\varphi(X_2)\big),\quad \forall\,\varphi\in C_{l,Lip}(\mathbb{R}^n),$$
whenever the sub-expectations are finite. A sequence $\{X_n,n\ge1\}$ of random variables is said to be identically distributed if, for each $i\ge1$, $X_i$ and $X_1$ are identically distributed.
Definition 2.5. (Negative dependence) In a sub-linear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, a random vector $Y=(Y_1,\dots,Y_n)$, $Y_i\in\mathcal{H}$, is said to be negatively dependent (ND) on another random vector $X=(X_1,\dots,X_m)$, $X_i\in\mathcal{H}$, under $\hat{\mathbb{E}}$ if for each pair of test functions $\varphi_1\in C_{l,Lip}(\mathbb{R}^m)$ and $\varphi_2\in C_{l,Lip}(\mathbb{R}^n)$ we have
$$\hat{\mathbb{E}}\big[\varphi_1(X)\varphi_2(Y)\big]\le\hat{\mathbb{E}}\big[\varphi_1(X)\big]\,\hat{\mathbb{E}}\big[\varphi_2(Y)\big],$$
whenever $\varphi_1(X)\ge0$, $\hat{\mathbb{E}}[\varphi_2(Y)]\ge0$, $\hat{\mathbb{E}}[|\varphi_1(X)\varphi_2(Y)|]<\infty$, $\hat{\mathbb{E}}[|\varphi_1(X)|]<\infty$, $\hat{\mathbb{E}}[|\varphi_2(Y)|]<\infty$, and either $\varphi_1$ and $\varphi_2$ are both coordinatewise non-increasing or both coordinatewise non-decreasing.
A sequence of random variables $\{X_n,n\ge1\}$ is said to be negatively dependent if $X_{i+1}$ is negatively dependent on $(X_1,\dots,X_i)$ for each $i\ge1$.
It is obvious that if $\{X_n,n\ge1\}$ is a sequence of negatively dependent random variables and the functions $f_1(x),f_2(x),\dots\in C_{l,Lip}(\mathbb{R})$ are all non-decreasing (resp. all non-increasing), then $\{f_n(X_n),n\ge1\}$ is also a sequence of negatively dependent random variables.
Definition 2.6. A sub-linear expectation $\hat{\mathbb{E}}:\mathcal{H}\to\mathbb{R}$ is said to be countably sub-additive if
$$\hat{\mathbb{E}}(X)\le\sum_{n=1}^{\infty}\hat{\mathbb{E}}(X_n)\quad\text{whenever}\quad X\le\sum_{n=1}^{\infty}X_n,\ \ X,X_n\in\mathcal{H},\ X\ge0,\ X_n\ge0,\ n\ge1.$$
We need the following lemmas to prove the main results.
Lemma 2.1. (Zhang [5]) Suppose that $X_k$ is negatively dependent on $(X_{k+1},\dots,X_n)$ for each $k=1,\dots,n-1$ and $\hat{\mathbb{E}}(X_k)\le0$. Then for $q\ge2$,
$$\hat{\mathbb{E}}\Big[\max_{k\le n}|S_k|^{q}\Big]\le c_q\Big\{\sum_{k=1}^{n}\hat{\mathbb{E}}\big[|X_k|^{q}\big]+\Big(\sum_{k=1}^{n}\hat{\mathbb{E}}\big[|X_k|^{2}\big]\Big)^{q/2}+\Big(\sum_{k=1}^{n}\big[(\hat{\varepsilon}X_k)^{-}+(\hat{\mathbb{E}}X_k)^{+}\big]\Big)^{q}\Big\},\tag{2.3}$$
where $S_k=\sum_{i=1}^{k}X_i$ and $c_q$ is a positive constant depending only on $q$.
Lemma 2.2. Suppose $X\in\mathcal{H}$, $\gamma>0$, $0<\alpha\le2$ and $b_y=y^{1/\alpha}\ln^{1/\gamma}y$.
(i) Then, for any $c>0$,
$$C_{\mathbb{V}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)<\infty \Longleftrightarrow \sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)<\infty.\tag{2.4}$$
(ii) If $C_{\mathbb{V}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)<\infty$, then for any $\beta>1$ and $c>0$,
$$\sum_{k=1}^{\infty}\beta^{2k/\alpha}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big)<\infty.\tag{2.5}$$
Proof. (i) Note that
$$C_{\mathbb{V}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)<\infty \Longleftrightarrow \int_{1}^{\infty}\mathbb{V}\big(|X|^{2}\ln^{1-2/\gamma}|X|>x\big)\,dx<\infty.\tag{2.6}$$
Let $f(x)=x^{2}\ln^{1-2/\gamma}x$, $x>1$, and let $f^{-1}(x)$ denote its inverse function. Then
$$\int_{1}^{\infty}\mathbb{V}\big(|X|^{2}\ln^{1-2/\gamma}|X|>x\big)\,dx=\int_{1}^{\infty}\mathbb{V}\big(|X|>f^{-1}(x)\big)\,dx.\tag{2.7}$$
Setting $f^{-1}(x)=cb_y=cy^{1/\alpha}\ln^{1/\gamma}y$ for any $c>0$, we have
$$x=f\big(cy^{1/\alpha}\ln^{1/\gamma}y\big)=cy^{2/\alpha}\ln^{2/\gamma}y\cdot\ln^{1-2/\gamma}\big(y^{1/\alpha}\ln^{1/\gamma}y\big).$$
Let
$$h(y):=\ln^{2/\gamma}y\cdot\ln^{1-2/\gamma}\big(y^{1/\alpha}\ln^{1/\gamma}y\big)=c_2\exp\Big\{\int_{e}^{y}\frac{g(u)}{u}\,du\Big\},$$
where $c_2$ is a positive constant depending only on $\alpha$ and $\gamma$,
$$g(u)=\frac{2}{\gamma}\cdot\frac{1}{\ln u}+\Big(1-\frac{2}{\gamma}\Big)\frac{\frac{1}{\alpha}+\frac{1}{\gamma\ln u}}{\frac{1}{\alpha}\ln u+\frac{1}{\gamma}\ln\ln u},$$
and obviously $g(u)\to0$ as $u\to\infty$. Then, for any $c>0$, we can get
$$x'=\big(cy^{2/\alpha}h(y)\big)'=\frac{2c}{\alpha}y^{2/\alpha-1}h(y)+cy^{2/\alpha}h(y)\frac{g(y)}{y}\sim c\,y^{2/\alpha-1}\ln y.$$
Therefore, combining (2.7), for any $c>0$ we have
$$\int_{1}^{\infty}\mathbb{V}\big(|X|>f^{-1}(x)\big)\,dx=\int_{1}^{\infty}\mathbb{V}(|X|>cb_y)\,x'\,dy\sim c\int_{1}^{\infty}\mathbb{V}(|X|>cb_y)\,y^{2/\alpha-1}\ln y\,dy.\tag{2.8}$$
Combining (2.6)-(2.8), we obtain
$$C_{\mathbb{V}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)<\infty \Longleftrightarrow \sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)<\infty;$$
hence (i) is proved.
(ii) By the proof of (i), we have (2.4); then for any $\beta>1$ and $c>0$,
$$\infty>\sum_{n=1}^{\infty}n^{2/\alpha-1}\ln n\,\mathbb{V}(|X|>cb_n)\ge c\sum_{k=1}^{\infty}\sum_{\beta^{k-1}\le n<\beta^{k}}\beta^{k(2/\alpha-1)}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big)=c\sum_{k=1}^{\infty}\beta^{2k/\alpha}k\ln\beta\,\mathbb{V}\big(|X|>cb_{\beta^{k}}\big),$$
hence (ii) is proved.
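The key analytic step in part (i) is the asymptotic relation $x'\sim c\,y^{2/\alpha-1}\ln y$ used in (2.8). The short numerical sketch below (natural logarithms and the specific values of $\alpha$, $\gamma$, $c$ are illustrative assumptions, not choices made in the paper) checks that the finite-difference derivative of $x(y)=f(cb_y)$ indeed grows like $y^{2/\alpha-1}\ln y$, the ratio settling towards a constant as $y$ increases.

```python
import numpy as np

# Illustrative parameters; the asymptotics x'(y) ~ const * y^(2/alpha - 1) * ln(y)
# should hold for any 0 < alpha <= 2, gamma > 0 and c > 0.
alpha, gamma, c = 1.5, 1.0, 2.0

def x_of_y(y):
    """x(y) = f(c*b_y), f(x) = x^2 * ln(x)^(1 - 2/gamma), b_y = y^(1/alpha) * ln(y)^(1/gamma)."""
    b = y ** (1.0 / alpha) * np.log(y) ** (1.0 / gamma)
    return (c * b) ** 2 * np.log(c * b) ** (1.0 - 2.0 / gamma)

for y in (1e4, 1e6, 1e8, 1e10):
    h = y * 1e-6
    deriv = (x_of_y(y + h) - x_of_y(y - h)) / (2.0 * h)      # central difference
    ratio = deriv / (y ** (2.0 / alpha - 1.0) * np.log(y))
    print(f"y = {y:.0e}:  x'(y) / (y^(2/a-1) ln y) = {ratio:.3f}")
```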
Lemma 2.3. (Zhang [5]) If $\hat{\mathbb{E}}$ is countably sub-additive, then for $X\in\mathcal{H}$,
$$\hat{\mathbb{E}}(|X|)\le C_{\mathbb{V}}(|X|).\tag{2.9}$$
Theorem 3.1. Assume that $\{X,X_n,n\ge1\}$ is a sequence of negatively dependent and identically distributed random variables under sub-linear expectations. Suppose that $\{a_{nk},1\le k\le n,n\ge1\}$ is an array of positive real numbers and that $\hat{\mathbb{E}}$ is countably sub-additive. Set $b_n=n^{1/\alpha}\ln^{1/\gamma}n$, where $0<\alpha\le2$ and $0<\gamma<2$. If
$$C_{\mathbb{V}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)<\infty,\tag{3.1}$$
$$\sum_{k=1}^{n}a_{nk}^{\alpha}=O(n),\tag{3.2}$$
$$\hat{\mathbb{E}}X_k=\hat{\varepsilon}X_k=0,\tag{3.3}$$
then for any $\varepsilon>0$,
$$\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)<\infty.\tag{3.4}$$
Theorem 3.2. Assume that the conditions of Theorem 3.1 are satisfied. Then for $0<\theta<2$ and any $\varepsilon>0$,
$$\sum_{n=1}^{\infty}n^{-1}C_{\mathbb{V}}\Big\{b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon\Big\}_{+}^{\theta}<\infty,\tag{3.5}$$
where $\{x\}_{+}$ denotes the positive part $\max\{x,0\}$.
Remark 3.1. Theorem 3.2 not only extends the result of Wu and Wang [18] from the probability space to the sub-linear expectation space, but also relaxes $1<\alpha\le2$ to $0<\alpha\le2$, $0<\gamma<\alpha$ to $0<\gamma<2$, and $0<\theta<\alpha$ to $0<\theta<2$, thereby enlarging the original range and strengthening the result.
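As a rough sanity check, not part of the proof, one can examine the classical special case in which the sub-linear expectation is an ordinary expectation and the $X_k$ are i.i.d. standard normal variables, independence being a particular instance of negative dependence. With the weights $a_{nk}=1$ (so that (3.2) holds trivially) and illustrative parameters $\alpha=2$, $\gamma=1$, $\varepsilon=0.5$, the probabilities $\mathbb{V}\big(\max_{1\le j\le n}|\sum_{k=1}^{j}a_{nk}X_k|>b_n\varepsilon\big)$ should decay quickly; the Python sketch below estimates them by simulation (all parameter choices and the natural logarithm used for $b_n$ are assumptions made for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, gamma, eps, reps = 2.0, 1.0, 0.5, 2000   # illustrative choices

for n in (100, 400, 1600, 6400):
    b_n = n ** (1.0 / alpha) * np.log(n) ** (1.0 / gamma)
    X = rng.standard_normal((reps, n))                       # i.i.d. rows of X_1, ..., X_n
    max_partial = np.abs(np.cumsum(X, axis=1)).max(axis=1)   # max_j |S_j| with a_nk = 1
    p = float((max_partial > eps * b_n).mean())
    print(f"n = {n:5d}:  P(max_j |S_j| > eps * b_n) is approximately {p:.4f}")
```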
For fixed $n\ge1$ and $1\le k\le n$, denote
$$Y_{nk}:=-b_nI(X_k<-b_n)+X_kI(|X_k|\le b_n)+b_nI(X_k>b_n),$$
$$Z_{nk}:=X_k-Y_{nk}=(X_k+b_n)I(X_k<-b_n)+(X_k-b_n)I(X_k>b_n).$$
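In code, the truncation splits each observation into a clipped part and an overshoot part; the following Python sketch is a direct transcription of the two defining expressions, applied to hypothetical numeric values.

```python
import numpy as np

def truncate(X, b_n):
    """Split X into Y_nk (X clipped to [-b_n, b_n]) and the overshoot Z_nk = X - Y_nk."""
    Y = np.clip(X, -b_n, b_n)   # = -b_n*I(X < -b_n) + X*I(|X| <= b_n) + b_n*I(X > b_n)
    Z = X - Y                   # = (X + b_n)*I(X < -b_n) + (X - b_n)*I(X > b_n)
    return Y, Z

X = np.array([-5.0, -1.0, 0.5, 3.0])   # hypothetical sample values
Y, Z = truncate(X, b_n=2.0)
print(Y)   # [-2.  -1.   0.5  2. ]
print(Z)   # [-3.   0.   0.   1. ]
```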
We can easily see that, for any $\varepsilon>0$,
$$\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big\}\subset\big\{\exists\,1\le k\le n:\ |X_k|>b_n\big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon,\ \forall\,1\le k\le n:\ |X_k|\le b_n\Big\}$$
$$\subset\big\{\exists\,1\le k\le n:\ |X_k|>b_n\big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>b_n\varepsilon-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\Big\}.$$
Then, we have
$$\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\mathbb{V}(|X_k|>b_n)+\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>b_n\varepsilon-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\Big)=:I_1+I_2.$$
In order to prove (3.4), we just need to prove
$$I_1<\infty,\tag{4.1}$$
$$I_2<\infty.\tag{4.2}$$
First, we prove (4.1). In a probability space, $EI(|X|\le a)=P(|X|\le a)$ always holds; under sub-linear expectations, however, the indicator $I(|x|\le a)$ is not continuous and hence need not belong to $\mathcal{H}$, so $\hat{\mathbb{E}}I(|X|\le a)$ does not necessarily exist. We therefore replace the indicator function by functions in $C_{l,Lip}(\mathbb{R})$ and define $g(x)\in C_{l,Lip}(\mathbb{R})$ as follows.
For $2^{-1/\alpha}<\mu<1$, let $g(x)\in C_{l,Lip}(\mathbb{R})$ be an even function, decreasing in $x\ge0$, such that $0\le g(x)\le1$ for all $x$, $g(x)=1$ if $|x|\le\mu$, and $g(x)=0$ if $|x|>1$. Then
$$I(|x|\le\mu)\le g(|x|)\le I(|x|\le1),\qquad I(|x|>1)\le1-g(|x|)\le I(|x|>\mu).\tag{4.3}$$
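One admissible concrete choice of $g$ (the argument only requires the listed properties, so the piecewise-linear form below is an assumption made for illustration) together with a numerical check of the sandwich inequalities (4.3):

```python
import numpy as np

mu = 0.8   # any mu with 2**(-1/alpha) < mu < 1 works; 0.8 is an illustrative choice

def g(x):
    """Even, Lipschitz, decreasing on [0, inf): equal to 1 on [0, mu] and 0 on [1, inf)."""
    return np.clip((1.0 - np.abs(x)) / (1.0 - mu), 0.0, 1.0)

xs = np.linspace(0.0, 1.5, 301)
# (4.3): I(|x| <= mu) <= g(x) <= I(|x| <= 1)  and  I(|x| > 1) <= 1 - g(x) <= I(|x| > mu)
print(np.all((xs <= mu).astype(float) <= g(xs)))          # True
print(np.all(g(xs) <= (xs <= 1.0).astype(float)))         # True
print(np.all((xs > 1.0).astype(float) <= 1.0 - g(xs)))    # True
print(np.all(1.0 - g(xs) <= (xs > mu).astype(float)))     # True
```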
By (2.4), (3.1) and (4.3), we have
$$I_1=\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\mathbb{V}(|X_k|>b_n)\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big)=\sum_{n=1}^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\le\sum_{n=1}^{\infty}\mathbb{V}(|X|>\mu b_n)<\infty;$$
hence, (4.1) is proved.
Next, we prove (4.2). First, we verify that
$$b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\to0,\quad\text{as }n\to\infty.$$
Note that
$$|Z_{nk}|=|X_k+b_n|I(X_k<-b_n)+|X_k-b_n|I(X_k>b_n)\le|X_k|\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big).\tag{4.4}$$
Since $a_{nk}$ is non-negative, (3.3) gives
$$\hat{\mathbb{E}}(a_{nk}X_k)=a_{nk}\hat{\mathbb{E}}X_k=0.\tag{4.5}$$
Combining with (4.3), we have
$$|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\le|X|I(|X|>\mu b_n)\le\frac{|X|^{2}\ln^{1-2/\gamma}|X|}{\mu b_n\ln^{1-2/\gamma}(\mu b_n)}.\tag{4.6}$$
When $1\le\alpha\le2$, combining (2.2), (2.9), (3.1), (3.2), (4.4)-(4.6), the Hölder inequality and $\ln b_n\sim c\ln n$, it is easy to obtain
$$\begin{aligned}b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|&\le b_n^{-1}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}Y_{nk})\big|=b_n^{-1}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}X_k)-\hat{\mathbb{E}}(a_{nk}Y_{nk})\big|\le b_n^{-1}\sum_{k=1}^{n}\hat{\mathbb{E}}\big|a_{nk}X_k-a_{nk}Y_{nk}\big|\\&\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Z_{nk}|\le b_n^{-1}\Big(\sum_{k=1}^{n}a_{nk}^{\alpha}\Big)^{1/\alpha}\Big(\sum_{k=1}^{n}1\Big)^{1-1/\alpha}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le cnb_n^{-1}\big(\mu b_n\ln^{1-2/\gamma}(\mu b_n)\big)^{-1}\hat{\mathbb{E}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)\le\frac{cn}{b_n^{2}\ln^{1-2/\gamma}b_n}\le\frac{c_1}{n^{2/\alpha-1}\ln n}\to0,\quad\text{as }n\to\infty.\end{aligned}\tag{4.7}$$
When $0<\alpha<1$, by (2.9) and (3.1) we get $\hat{\mathbb{E}}(|X|)<\infty$. Noting $|Y_{nk}|\le|X_k|$, $\sum_{k=1}^{n}a_{nk}\le\big(\sum_{k=1}^{n}a_{nk}^{\alpha}\big)^{1/\alpha}=O(n^{1/\alpha})$ and $\gamma>0$, we have
$$b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le b_n^{-1}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}Y_{nk})\big|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Y_{nk}|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le cn^{1/\alpha}b_n^{-1}\hat{\mathbb{E}}|X|\le\frac{c_1}{\ln^{1/\gamma}n}\to0,\quad\text{as }n\to\infty.\tag{4.8}$$
Thus, for any $\varepsilon>0$ and all $n$ large enough, we have
$$\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y_{nk})\Big|\le\frac{b_n\varepsilon}{2}.$$
In order to prove (4.2), it suffices to show that
$$I_3=\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|>\frac{b_n\varepsilon}{2}\Big)<\infty.$$
Note that, for $p\ge1$,
$$\hat{\mathbb{E}}\big|Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big|^{p}\le c_p\hat{\mathbb{E}}\big(|Y_{nk}|^{p}+|\hat{\mathbb{E}}Y_{nk}|^{p}\big)\le2c_p\hat{\mathbb{E}}|Y_{nk}|^{p},\tag{4.9}$$
where $c_p$ is a positive constant depending only on $p$.
Since $a_{nk}$ is non-negative, Definition 2.5 implies that, for fixed $n\ge1$, $\{a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk}),1\le k\le n\}$ is still a negatively dependent sequence of random variables. By (4.9), the Markov inequality and (2.3) with $q=2$, we get
$$\begin{aligned}I_3&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\hat{\mathbb{E}}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y_{nk}-\hat{\mathbb{E}}Y_{nk}\big)\Big|^{2}\Big)\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}\big|a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})\big|^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big[\big(\hat{\mathbb{E}}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{+}+\big(\hat{\varepsilon}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{-}\big]\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big[\big(\hat{\mathbb{E}}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{+}+\big(\hat{\varepsilon}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{-}\big]\Big)^{2}=:I_4+I_5.\end{aligned}$$
By (4.3) and the $C_r$ inequality, for any $\lambda>0$ we obtain
$$|Y_{nk}|^{\lambda}\le|X_k|^{\lambda}I(|X_k|\le b_n)+b_n^{\lambda}I(|X_k|>b_n)\le|X_k|^{\lambda}g\Big(\frac{\mu|X_k|}{b_n}\Big)+b_n^{\lambda}\Big(1-g\Big(\frac{|X_k|}{b_n}\Big)\Big).\tag{4.10}$$
For $I_4$, combining (2.4), (3.1), (3.2), (4.10) and the fact that $\sum_{k=1}^{n}a_{nk}^{2}\le\big(\sum_{k=1}^{n}a_{nk}^{\alpha}\big)^{2/\alpha}=O(n^{2/\alpha})$ (since $\alpha\le2$), we have
$$\begin{aligned}I_4&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y_{nk}|^{2}=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\sum_{k=1}^{n}a_{nk}^{2}\hat{\mathbb{E}}|Y_{nk}|^{2}\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\Big(\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)+b_n^{2}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)+c\sum_{n=1}^{\infty}n^{2/\alpha-1}\mathbb{V}(|X|>\mu b_n),\end{aligned}$$
where the second series is finite by (2.4) and (3.1), so it remains to bound $c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}|X|^{2}g\big(\frac{\mu|X|}{b_n}\big)$.
Let $g_j(x)\in C_{l,Lip}(\mathbb{R})$, $j\ge1$, be such that $0\le g_j(x)\le1$ for all $x$, $g_j\big(\frac{x}{b_{2^{j}}}\big)=1$ if $b_{2^{j-1}}<|x|\le b_{2^{j}}$, and $g_j\big(\frac{x}{b_{2^{j}}}\big)=0$ if $|x|\le\mu b_{2^{j-1}}$ or $|x|>(1+\mu)b_{2^{j}}$. Then for any $r>0$, we can obtain
$$I\big(b_{2^{j-1}}<|X|\le b_{2^{j}}\big)\le g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\le I\big(\mu b_{2^{j-1}}<|X|\le(1+\mu)b_{2^{j}}\big),\tag{4.11}$$
$$|X|^{r}g\Big(\frac{|X|}{b_{2^{k}}}\Big)\le1+\sum_{j=1}^{k}|X|^{r}g_j\Big(\frac{|X|}{b_{2^{j}}}\Big).\tag{4.12}$$
According to (2.5), (3.1), (4.11), (4.12), $0<\gamma<2$ and the fact that $g(x)$ is decreasing in $x\ge0$, it is easy to see that
$$\begin{aligned}I_4&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{k(2/\alpha-1)}b_{2^{k-1}}^{-2}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_{2^{k}}}\Big)\Big)\\&\le c\sum_{k=1}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}\sum_{j=1}^{k}\hat{\mathbb{E}}\Big(|X|^{2}g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)=c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=j}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}\\&\le c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big(|X|^{2}g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)\sum_{k=j}^{\infty}k^{-2/\gamma}\le c\sum_{j=1}^{\infty}j^{1-2/\gamma}\hat{\mathbb{E}}\Big(|X|^{2}g_j\Big(\frac{\mu|X|}{b_{2^{j}}}\Big)\Big)\\&\le c\sum_{j=1}^{\infty}j^{1-2/\gamma}b_{2^{j}}^{2}\mathbb{V}\big(|X|>b_{2^{j-1}}\big)=c\sum_{j=1}^{\infty}2^{2j/\alpha}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty.\end{aligned}\tag{4.13}$$
Next, we estimate $I_5$. According to (2.1), $\hat{\mathbb{E}}\big(a_{nk}Y_{nk}-\hat{\mathbb{E}}(a_{nk}Y_{nk})\big)=0$, so the terms $\big(\hat{\mathbb{E}}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{+}$ vanish; moreover, by the definition of the conjugate expectation, $\hat{\mathbb{E}}(-X)=-\hat{\varepsilon}(X)$. Combining (2.1), (2.2), (3.3) and the $C_r$ inequality, and noting $(-z)^{-}=z^{+}\le|z|$, we obtain
$$\begin{aligned}I_5&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big(\hat{\varepsilon}[a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{-}\Big)^{2}=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big(-\hat{\mathbb{E}}[-a_{nk}(Y_{nk}-\hat{\mathbb{E}}Y_{nk})]\big)^{-}\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}\big|\hat{\mathbb{E}}[-a_{nk}Y_{nk}]+\hat{\mathbb{E}}[a_{nk}Y_{nk}]\big|\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\big(|\hat{\mathbb{E}}[-Y_{nk}]|+|\hat{\mathbb{E}}[Y_{nk}]|\big)\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}|\hat{\mathbb{E}}[-Y_{nk}]|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}|\hat{\mathbb{E}}[Y_{nk}]|\Big)^{2}\\&=c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\big|\hat{\mathbb{E}}[-X_k]-\hat{\mathbb{E}}[-Y_{nk}]\big|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\big|\hat{\mathbb{E}}[X_k]-\hat{\mathbb{E}}[Y_{nk}]\big|\Big)^{2}\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\big|{-X_k}-(-Y_{nk})\big|\Big)^{2}+c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}.\end{aligned}$$
Noting $2^{-1/\alpha}<\mu<1$, we get $\mu>b_{2^{k-1}}/b_{2^{k}}$. By (4.3), we have
$$1-g\Big(\frac{|X|}{b_{2^{k}}}\Big)\le I\Big(\frac{|X|}{b_{2^{k}}}>\mu\Big)\le I\big(|X|>b_{2^{k-1}}\big)=\sum_{j=k}^{\infty}I\big(b_{2^{j-1}}<|X|\le b_{2^{j}}\big)\le\sum_{j=k}^{\infty}g_j\Big(\frac{|X|}{b_{2^{j}}}\Big).\tag{4.14}$$
For $I_5$, combining (2.5), (3.1), (3.2), (4.11), (4.14) and the countable sub-additivity of $\hat{\mathbb{E}}$, we get
$$\begin{aligned}I_5&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y_{nk}|\Big)^{2}\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}n^{\max(2,2/\alpha)}\Big(\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\Big)^{2}\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{-k}b_{2^{k-1}}^{-2}2^{\max(2,2/\alpha)k}\Big(\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big)\Big)^{2}\le c\sum_{k=1}^{\infty}b_{2^{k-1}}^{-2}2^{\max(2,2/\alpha)k}\Big(\sum_{j=k}^{\infty}\hat{\mathbb{E}}|X|g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big)^{2}\\&\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\cdot b_{2^{k}}k^{1-2/\gamma}\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\cdot\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\\&\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\sum_{j=k}^{\infty}b_{2^{j}}^{2}j^{1-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\cdot\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\\&\le c\sum_{k=1}^{\infty}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\sum_{j=k}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)=c\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\sum_{k=1}^{j}2^{\max(2,2/\alpha)k}k^{2/\gamma-1}b_{2^{k}}^{-3}\\&=c\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big)\sum_{k=1}^{j}2^{\max(2-3/\alpha,-1/\alpha)k}k^{-1-1/\gamma}\\&\le\begin{cases}c\sum_{j=1}^{\infty}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big), & 0<\alpha\le3/2,\\ c\sum_{j=1}^{\infty}2^{j(2-3/\alpha)}j^{-1-1/\gamma}b_{2^{j}}\mathbb{V}\big(|X|>cb_{2^{j}}\big), & 3/2<\alpha\le2,\end{cases}\\&\le\begin{cases}c\sum_{j=1}^{\infty}2^{j/\alpha}j^{1/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty, & 0<\alpha\le3/2,\\ c\sum_{j=1}^{\infty}2^{2j(1-1/\alpha)}j^{-1}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty, & 3/2<\alpha\le2,\end{cases}\end{aligned}$$
where the fifth inequality uses the fact that $\sum_{j=1}^{\infty}b_{2^{j}}^{2}j^{1-2/\gamma}\mathbb{V}(|X|>cb_{2^{j}})=\sum_{j=1}^{\infty}2^{2j/\alpha}j\,\mathbb{V}(|X|>cb_{2^{j}})<\infty$ by (2.5).
For any $\varepsilon>0$, by the definition of $C_{\mathbb{V}}$ we have
$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}C_{\mathbb{V}}\Big\{b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon\Big\}_{+}^{\theta}&=\sum_{n=1}^{\infty}n^{-1}\int_{0}^{\infty}\mathbb{V}\Big(b_n^{-1}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|-\varepsilon>t^{1/\theta}\Big)\,dt\\&\le\sum_{n=1}^{\infty}n^{-1}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_n\varepsilon\Big)+\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_nt^{1/\theta}\Big)\,dt,\end{aligned}$$
and the first series on the right-hand side is finite by Theorem 3.1. Hence it suffices to bound $\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\mathbb{V}\big(\max_{1\le j\le n}\big|\sum_{k=1}^{j}a_{nk}X_k\big|>b_nt^{1/\theta}\big)\,dt$.
In order to prove (3.5), for $t\ge1$ and $1\le k\le n$, denote
$$Y'_{nk}:=-b_nt^{1/\theta}I\big(X_k<-b_nt^{1/\theta}\big)+X_kI\big(|X_k|\le b_nt^{1/\theta}\big)+b_nt^{1/\theta}I\big(X_k>b_nt^{1/\theta}\big),$$
$$Z'_{nk}:=X_k-Y'_{nk}=\big(X_k+b_nt^{1/\theta}\big)I\big(X_k<-b_nt^{1/\theta}\big)+\big(X_k-b_nt^{1/\theta}\big)I\big(X_k>b_nt^{1/\theta}\big).$$
We can easily see that
$$\begin{aligned}\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}X_k\Big|>b_nt^{1/\theta}\Big)\,dt&\le\sum_{n=1}^{\infty}n^{-1}\sum_{k=1}^{n}\int_{1}^{\infty}\mathbb{V}\big(|X_k|>b_nt^{1/\theta}\big)\,dt\\&\quad+\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk}\big)\Big|>b_nt^{1/\theta}-\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y'_{nk})\Big|\Big)\,dt\\&=:H_1+H_2.\end{aligned}$$
In order to prove (3.5), we just need to prove
$$H_1<\infty,\tag{4.15}$$
$$H_2<\infty.\tag{4.16}$$
First, we prove (4.15). For $t\ge1$ we have $t^{1/\theta}\ge1$, and since $g(x)$ is decreasing in $x\ge0$, $1-g\big(\frac{|X|}{b_nt^{1/\theta}}\big)\le1-g\big(\frac{|X|}{b_n}\big)$. By (2.5), (3.1), (4.11), (4.14), $0<\theta<2$ and the countable sub-additivity of $\hat{\mathbb{E}}$, we obtain
$$\begin{aligned}H_1&\le\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\sum_{k=1}^{n}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X_k|}{b_nt^{1/\theta}}\Big)\Big)\,dt\le\sum_{n=1}^{\infty}\int_{1}^{\infty}\hat{\mathbb{E}}\bigg[\frac{|X|^{2}\ln^{1/2-2/\gamma}|X|}{(\mu b_nt^{1/\theta})^{2}\ln^{1/2-2/\gamma}(\mu b_nt^{1/\theta})}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\bigg]\,dt\\&\le c\sum_{n=1}^{\infty}\int_{1}^{\infty}(\mu b_nt^{1/\theta})^{-2}\ln^{2/\gamma-1/2}(\mu b_nt^{1/\theta})\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\,dt\\&=c\sum_{n=1}^{\infty}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\int_{\mu b_n}^{\infty}y^{-2}\ln^{2/\gamma-1/2}y\cdot b_n^{-\theta}\mu^{-\theta}\theta y^{\theta-1}\,dy\qquad(\text{let }y=\mu b_nt^{1/\theta})\\&\le c\sum_{n=1}^{\infty}b_n^{-\theta}b_n^{\theta-2}\ln^{2/\gamma-1/2}(\mu b_n)\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}b_{2^{k-1}}^{-2}\ln^{2/\gamma-1/2}(\mu b_{2^{k-1}})\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big]\\&\le c\sum_{k=1}^{\infty}2^{k(1-2/\alpha)}k^{-1/2}\sum_{j=k}^{\infty}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big]=c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big]\sum_{k=1}^{j}2^{k(1-2/\alpha)}k^{-1/2}\\&\le\begin{cases}c\sum_{j=1}^{\infty}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big), & 0<\alpha<2,\\ c\sum_{j=1}^{\infty}j^{1/2}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big), & \alpha=2,\end{cases}\\&\le\begin{cases}c\sum_{j=1}^{\infty}2^{2j/\alpha}j^{1/2}\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty, & 0<\alpha<2,\\ c\sum_{j=1}^{\infty}2^{j}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty, & \alpha=2;\end{cases}\end{aligned}$$
hence, (4.15) is proved.
Next, we prove (4.16). We first show that
$$\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y'_{nk})\Big|\to0,\quad\text{as }n\to\infty.$$
When $1\le\alpha\le2$, arguing as in (4.7), we have
$$\begin{aligned}\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y'_{nk})\Big|&\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}Y'_{nk})\big|=\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}X_k)-\hat{\mathbb{E}}(a_{nk}Y'_{nk})\big|\\&\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Z'_{nk}|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)\\&\le\sup_{t\ge1}cb_n^{-1}t^{-1/\theta}n\,\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\le cnb_n^{-1}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\\&\le cnb_n^{-1}\big(\mu b_n\ln^{1-2/\gamma}(\mu b_n)\big)^{-1}\hat{\mathbb{E}}\big(|X|^{2}\ln^{1-2/\gamma}|X|\big)\le\frac{c_1}{n^{2/\alpha-1}\ln n}\to0,\quad\text{as }n\to\infty.\end{aligned}$$
When $0<\alpha<1$, noting $|Y'_{nk}|\le|X_k|$ and arguing as in (4.8), we get
$$\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y'_{nk})\Big|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}\big|\hat{\mathbb{E}}(a_{nk}Y'_{nk})\big|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|Y'_{nk}|\le\sup_{t\ge1}b_n^{-1}t^{-1/\theta}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le b_n^{-1}\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X|\le\frac{c_1}{\ln^{1/\gamma}n}\to0,\quad\text{as }n\to\infty.$$
Hence, for $t\ge1$ and all $n$ large enough, we can get
$$\max_{1\le j\le n}\Big|\sum_{k=1}^{j}\hat{\mathbb{E}}(a_{nk}Y'_{nk})\Big|\le\frac{b_nt^{1/\theta}}{2}.$$
In order to prove (4.16), it suffices to show that
$$H_3=\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}\mathbb{V}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk}\big)\Big|>\frac{b_nt^{1/\theta}}{2}\Big)\,dt<\infty.$$
Since $a_{nk}$ is non-negative, Definition 2.5 implies that, for fixed $n\ge1$, $\{a_{nk}(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk}),1\le k\le n\}$ is still a negatively dependent sequence of random variables. By (4.9), the Markov inequality and (2.3) with $q=2$, we get
$$\begin{aligned}H_3&\le c\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\hat{\mathbb{E}}\Big(\max_{1\le j\le n}\Big|\sum_{k=1}^{j}a_{nk}\big(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk}\big)\Big|^{2}\Big)\,dt\\&\le c\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\sum_{k=1}^{n}\hat{\mathbb{E}}|a_{nk}Y'_{nk}|^{2}\,dt+c\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\Big(\sum_{k=1}^{n}\big[\big(\hat{\mathbb{E}}[a_{nk}(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk})]\big)^{+}+\big(\hat{\varepsilon}[a_{nk}(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk})]\big)^{-}\big]\Big)^{2}\,dt\\&=:H_4+H_5.\end{aligned}$$
For $H_4$, as in (4.10), we get for any $\lambda>0$,
$$|Y'_{nk}|^{\lambda}\le|X_k|^{\lambda}g\Big(\frac{\mu|X_k|}{b_nt^{1/\theta}}\Big)+(b_nt^{1/\theta})^{\lambda}\Big(1-g\Big(\frac{|X_k|}{b_nt^{1/\theta}}\Big)\Big).\tag{4.17}$$
Combining (3.2) and (4.17), we have
$$\begin{aligned}H_4&=\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\sum_{k=1}^{n}a_{nk}^{2}\hat{\mathbb{E}}|Y'_{nk}|^{2}\,dt\\&\le\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\sum_{k=1}^{n}a_{nk}^{2}\Big(\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)+(b_nt^{1/\theta})^{2}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)\,dt\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_{1}^{\infty}t^{-2/\theta}\hat{\mathbb{E}}|X|^{2}g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)\,dt+c\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_{1}^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\,dt=:H_{41}+H_{42}.\end{aligned}$$
For $H_{42}$, combining (2.5), (3.1), (4.11), (4.14), $0<\gamma<2$, $0<\theta<2$, the countable sub-additivity of $\hat{\mathbb{E}}$ and the fact that $g(x)$ is decreasing in $x\ge0$, we have
$$\begin{aligned}H_{42}&=\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_{1}^{\infty}\hat{\mathbb{E}}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\,dt\le\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_{1}^{\infty}\hat{\mathbb{E}}\bigg[\frac{|X|^{2}\ln^{1/2-2/\gamma}|X|}{(\mu b_nt^{1/\theta})^{2}\ln^{1/2-2/\gamma}(\mu b_nt^{1/\theta})}\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\bigg]\,dt\\&\le\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_{1}^{\infty}(\mu b_nt^{1/\theta})^{-2}\ln^{2/\gamma-1/2}(\mu b_nt^{1/\theta})\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\,dt\\&=\sum_{n=1}^{\infty}n^{2/\alpha-1}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\int_{\mu b_n}^{\infty}y^{-2}\ln^{2/\gamma-1/2}y\cdot b_n^{-\theta}\mu^{-\theta}\theta y^{\theta-1}\,dy\\&\le c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-\theta}b_n^{\theta-2}\ln^{2/\gamma-1/2}(\mu b_n)\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big]\\&\le c\sum_{k=1}^{\infty}\sum_{2^{k-1}\le n<2^{k}}2^{k(2/\alpha-1)}b_{2^{k-1}}^{-2}\ln^{2/\gamma-1/2}(\mu b_{2^{k-1}})\,\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{|X|}{b_{2^{k-1}}}\Big)\Big)\Big]\\&\le c\sum_{k=1}^{\infty}2^{2k/\alpha}b_{2^{k}}^{-2}k^{2/\gamma-1/2}\sum_{j=k}^{\infty}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big]\le c\sum_{j=1}^{\infty}\hat{\mathbb{E}}\Big[|X|^{2}\ln^{1/2-2/\gamma}|X|\,g_j\Big(\frac{|X|}{b_{2^{j}}}\Big)\Big]\sum_{k=1}^{j}k^{-1/2}\\&\le c\sum_{j=1}^{\infty}j^{1/2}b_{2^{j}}^{2}j^{1/2-2/\gamma}\mathbb{V}\big(|X|>cb_{2^{j}}\big)=c\sum_{j=1}^{\infty}2^{2j/\alpha}j\,\mathbb{V}\big(|X|>cb_{2^{j}}\big)<\infty.\end{aligned}\tag{4.18}$$
Next, we prove $H_{41}<\infty$. Arguing as in the proofs of (4.13) and (4.18), we get
$$\begin{aligned}H_{41}&\le\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_{1}^{\infty}t^{-2/\theta}\hat{\mathbb{E}}\Big(|X|^{2}\Big(g\Big(\frac{\mu|X|}{b_nt^{1/\theta}}\Big)-g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\Big)\,dt+\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\int_{1}^{\infty}t^{-2/\theta}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\,dt\\&\le\sum_{n=1}^{\infty}n^{2/\alpha-1}\int_{1}^{\infty}b_n^{-2}t^{-2/\theta}\ln^{2/\gamma-1/2}b_n\,\hat{\mathbb{E}}\Big(|X|^{2}\ln^{1/2-2/\gamma}|X|\Big(1-g\Big(\frac{\mu|X|}{b_n}\Big)\Big)\Big)\,dt+c\sum_{n=1}^{\infty}n^{2/\alpha-1}b_n^{-2}\hat{\mathbb{E}}\Big(|X|^{2}g\Big(\frac{\mu|X|}{b_n}\Big)\Big)<\infty.\end{aligned}$$
Next, we estimate $H_5$. Similarly to the estimate of $I_5$, we have
$$\begin{aligned}H_5&=c\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\Big(\sum_{k=1}^{n}\big(\hat{\varepsilon}[a_{nk}(Y'_{nk}-\hat{\mathbb{E}}Y'_{nk})]\big)^{-}\Big)^{2}\,dt\le c\sum_{n=1}^{\infty}n^{-1}\int_{1}^{\infty}(b_nt^{1/\theta})^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}|X_k-Y'_{nk}|\Big)^{2}\,dt\\&\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\int_{1}^{\infty}t^{-2/\theta}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_nt^{1/\theta}}\Big)\Big)\Big)\Big)^{2}\,dt\le c\sum_{n=1}^{\infty}n^{-1}b_n^{-2}\Big(\sum_{k=1}^{n}a_{nk}\hat{\mathbb{E}}\Big(|X|\Big(1-g\Big(\frac{|X|}{b_n}\Big)\Big)\Big)\Big)^{2}<\infty.\end{aligned}$$
Hence, the proof of Theorem 3.2 is established.
This paper examines complete convergence and complete integral convergence in the sub-linear expectation space. The proof methodology differs from that used in a probability space, since in general $\mathbb{V}$ and $\hat{\mathbb{E}}$ need not be countably sub-additive in a sub-linear expectation space; moreover, identical distribution under sub-linear expectations is defined through $\hat{\mathbb{E}}$ rather than through $\mathbb{V}$.
Therefore, suitable auxiliary tools are crucial for a thorough investigation in the sub-linear expectation space. This study relies mainly on Zhang's [5] inequalities for upper expectations, which serve as useful tools in our proofs. Our results show that the complete convergence and complete integral convergence of maxima of weighted sums obtained here are more general than previous results. In future work, we aim to explore further results in this direction.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This paper was supported by the National Natural Science Foundation of China (12061028) and Guangxi Colleges and Universities Key Laboratory of Applied Statistics.
All authors declare that there is no conflict of interest in this article.
[1] S. G. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic analysis and applications, Berlin, Heidelberg: Springer, 2006, 541-567. http://doi.org/10.1007/978-3-540-70847-6_25
[2] S. Peng, Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation, Stoch. Proc. Appl., 118 (2008), 2223-2253. https://doi.org/10.1016/j.spa.2007.10.015
[3] S. G. Peng, Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations, Sci. China Ser. A-Math., 52 (2009), 1391-1411. https://doi.org/10.1007/s11425-009-0121-8
[4] L. X. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467-490. https://doi.org/10.1007/s10473-022-0203-z
[5] L. X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751-768. https://doi.org/10.1007/s11425-015-5105-2
[6] L. X. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503-2526. https://doi.org/10.1007/s11425-016-0079-1
[7] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25-31. https://doi.org/10.1073/pnas.33.2.25
[8] Y. S. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177-201.
[9] D. Qiu, P. Chen, Complete moment convergence for i.i.d. random variables, Stat. Probabil. Lett., 91 (2014), 76-82. https://doi.org/10.1016/j.spl.2014.04.001
[10] W. Yang, S. Hu, Complete moment convergence of pairwise NQD random variables, Stochastics, 87 (2015), 199-208. http://doi.org/10.1080/17442508.2014.939975
[11] M. Song, Q. Zhu, Complete moment convergence of extended negatively dependent random variables, J. Inequal. Appl., 2020 (2020), 150. https://doi.org/10.1186/s13660-020-02416-7
[12] S. Li, Q. Wu, Complete integration convergence for arrays of rowwise extended negatively dependent random variables under the sub-linear expectations, AIMS Mathematics, 6 (2021), 12166-12181. https://doi.org/10.3934/math.2021706
[13] D. Lu, Y. Meng, Complete and complete integral convergence for arrays of rowwise widely negative dependent random variables under the sub-linear expectations, Commun. Stat.-Theory M., 51 (2022), 2994-3007. https://doi.org/10.1080/03610926.2020.1786585
[14] X. Chen, Q. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694-9715. https://doi.org/10.3934/math.2022540
[15] F. X. Feng, X. Zeng, A complete convergence theorem of the maximum of partial sums under the sub-linear expectations, Filomat, 36 (2022), 5725-5735. https://doi.org/10.2298/FIL2217725F
[16] M. Xu, K. Cheng, W. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 7 (2022), 19998-20019. https://doi.org/10.3934/math.20221094
[17] M. Xu, X. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 8504-8521. https://doi.org/10.3934/math.2023428
[18] Y. Wu, Y. Wang, On the complete moment convergence for weighted sums of weakly dependent random variables, J. Math. Inequal., 15 (2021), 277-291. https://doi.org/10.7153/jmi-2021-15-21