In this article, we study complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations. The results obtained in sub-linear expectation spaces extend the corresponding ones in probability space.
Citation: Mingzhou Xu, Kun Cheng, Wangke Yu. Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2022, 7(11): 19998-20019. doi: 10.3934/math.20221094
Peng [1,2] first introduced the fundamental concepts of sub-linear expectation spaces in order to study uncertainty in probability. Inspired by these seminal works, many scholars have investigated results in sub-linear expectation spaces that extend the corresponding ones in classical probability spaces. Zhang [3,4,5] established Donsker's invariance principle, exponential inequalities and Rosenthal's inequality under sub-linear expectations. Wu [6] obtained precise asymptotics for complete integral convergence under sub-linear expectations. Xu and Cheng [7] investigated how small the increments of G-Brownian motion are under sub-linear expectations. For more limit theorems under sub-linear expectations, the interested reader may refer to Xu and Zhang [8,9], Wu and Jiang [10], Zhang and Lin [11], Zhong and Wu [12], Hu and Yang [13], Chen [14], Chen and Wu [15], Zhang [16], Hu, Chen and Zhang [17], Gao and Xu [18], Kuczmaszewska [19], Xu and Cheng [7,20,21,22,23] and the references therein.
In classical probability spaces, Hsu and Robbins [24] introduced the concept of complete convergence, Chow [25] investigated complete moment convergence for independent random variables, Zhang and Ding [26] proved complete moment convergence of the partial sums of moving average processes under suitable assumptions, and Meng et al. [27] established complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables. For references on complete moment convergence in linear expectation spaces, the interested reader may refer to Ko [28], Meng et al. [29], Hosseini and Nezakati [30] and the references therein. Motivated by the work of Meng et al. [27], and by the fact that X being independent of Y under sub-linear expectations implies that X is negatively dependent to Y under sub-linear expectations, we study complete convergence and complete moment convergence for weighted sums of identically distributed, negatively dependent random variables under sub-linear expectations, which extends the corresponding results of Meng et al. [27].
We organize the remainder of this paper as follows. In the next section we present the necessary basic notions, concepts, relevant properties and lemmas under sub-linear expectations. In Section 3, we state our main results, Theorems 3.1 and 3.2, whose proofs are given in Section 4.
As in Xu and Cheng [22], we use notations similar to those in the works of Peng [2], Chen [14] and Zhang [5]. Suppose that (Ω,F) is a given measurable space. Assume that H is a set of random variables on (Ω,F) such that IA∈H (cf. Chen [14]), where I(A) or IA denotes the indicator function of A throughout this paper, A∈F, and such that X1,⋯,Xn∈H implies φ(X1,⋯,Xn)∈H for each φ∈Cl,Lip(Rn), where Cl,Lip(Rn) denotes the linear space of local Lipschitz functions φ fulfilling
$$|\varphi(x)-\varphi(y)|\le C\big(1+|x|^m+|y|^m\big)|x-y|,\qquad\forall\,x,y\in\mathbb{R}^n,$$
for some C>0, m∈N both depending on φ.
Definition 2.1. A sub-linear expectation E on H is a functional E:H↦ˉR:=[−∞,∞] fulfilling the following properties: for all X,Y∈H, we have
(a) Monotonicity: If X≥Y, then E[X]≥E[Y];
(b) Constant preserving: E[c]=c, ∀c∈R;
(c) Positive homogeneity: E[λX]=λE[X], ∀λ≥0;
(d) Sub-additivity: E[X+Y]≤E[X]+E[Y] whenever E[X]+E[Y] is not of the form ∞−∞ or −∞+∞.
Remark 2.1. Positive homogeneity in (c) of Definition 2.1 can be understood via Theorem 1.2.1 of Peng [2], which says that a sub-linear expectation can be represented as a supremum of linear expectations. In Theorem 3.1, E[X]=E[−X]=0 implies that E[αX]=αE[X] for all α∈R, but it does not imply that E[αXβ]=αE[Xβ] for all α∈R and β≠1. By Lemma 2.1, in order to verify E[X]=E[−X]=0 in Theorem 3.1, one should have E[Z+X]=E[Z−X] for all Z∈H.
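For readers who prefer a computational picture, the representation in Remark 2.1 can be illustrated with a toy numerical model. The scenario family, sample space and names below are our own illustrative choices and are not part of the paper; this is only a minimal sketch of a sub-linear expectation realised as a supremum of finitely many linear expectations.

```python
import numpy as np

# Toy sketch (our own construction): a sub-linear expectation realised as a supremum of
# finitely many linear expectations, as in Peng's representation theorem, together with a
# numerical check of properties (b)-(d) of Definition 2.1 on this model.

rng = np.random.default_rng(0)
omega = rng.standard_normal(1000)          # a finite "sample space" of scenarios (toy)
thetas = [0.5, 1.0, 2.0]                   # volatility-uncertainty parameters (toy)

def E(phi):
    """Sub-linear expectation: the largest linear expectation over the scenario family."""
    return max(np.mean(phi(theta * omega)) for theta in thetas)

X = lambda w: w
Y = lambda w: np.abs(w)

assert abs(E(lambda w: 3.0 + 0.0 * w) - 3.0) < 1e-12         # (b) constant preserving
assert abs(E(lambda w: 2.0 * X(w)) - 2.0 * E(X)) < 1e-12     # (c) positive homogeneity
assert E(lambda w: X(w) + Y(w)) <= E(X) + E(Y) + 1e-12       # (d) sub-additivity
```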
A set function V:F↦[0,1] is called a capacity if
(a)V(∅)=0, V(Ω)=1;
(b)V(A)≤V(B), A⊂B, A,B∈F.
A capacity V is called sub-additive if V(A∪B)≤V(A)+V(B) for all A,B∈F.
In this article, given a sub-linear expectation space (Ω,H,E), write V(A):=inf{E[ξ]:IA≤ξ,ξ∈H}=E[IA], ∀A∈F (see (2.3) and the definitions of V above (2.3) in Zhang [4]). V is a sub-additive capacity. Define
$$C_V(X):=\int_{0}^{\infty}V(X>x)\,dx+\int_{-\infty}^{0}\big(V(X>x)-1\big)\,dx.$$
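To make the capacity V and the Choquet-type functional C_V concrete, the following toy computation (entirely our own construction, with an arbitrary family of centred normal scenarios and plain Riemann sums) approximates C_V(X) numerically; it also illustrates that C_V(X) can be strictly positive even though the mean of X is zero under every scenario.

```python
import numpy as np
from scipy.stats import norm

# Toy sketch (our own choices): the upper capacity V(A) = sup over scenarios of P_theta(A)
# for X = sigma * Z, Z standard normal, sigma in a finite family, and the Choquet-type
# functional  C_V(X) = int_0^inf V(X > x) dx + int_{-inf}^0 (V(X > x) - 1) dx,
# approximated by a Riemann sum on a truncated grid.

sigmas = [0.5, 1.0, 2.0]
grid = np.linspace(-30.0, 30.0, 600001)
dx = grid[1] - grid[0]

# V(X > x): the largest tail probability over the scenario family.
V_tail = np.max([norm.sf(grid, scale=s) for s in sigmas], axis=0)

pos = np.sum(np.where(grid > 0, V_tail, 0.0)) * dx          # int_0^inf V(X > x) dx
neg = np.sum(np.where(grid < 0, V_tail - 1.0, 0.0)) * dx    # int_{-inf}^0 (V(X > x) - 1) dx
print("C_V(X) ≈", pos + neg)   # ≈ 0.6 > 0, although E_sigma[X] = 0 under every scenario
```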
Suppose that X=(X1,⋯,Xm), Xi∈H, and Y=(Y1,⋯,Yn), Yi∈H, are two random vectors on (Ω,H,E). Y is said to be negatively dependent to X if for each pair of functions ψ1∈Cl,Lip(Rm), ψ2∈Cl,Lip(Rn) we have E[ψ1(X)ψ2(Y)]≤E[ψ1(X)]E[ψ2(Y)] whenever ψ1(X)≥0, E[ψ2(Y)]≥0, E[ψ1(X)ψ2(Y)]<∞, E[|ψ1(X)|]<∞, E[|ψ2(Y)|]<∞, and either ψ1 and ψ2 are both coordinatewise nondecreasing or ψ1 and ψ2 are both coordinatewise nonincreasing (see Definition 2.3 of Zhang [4], Definition 1.5 of Zhang [5], Definition 2.5 in Chen [14]). {Xn}∞n=1 is called a sequence of negatively dependent random variables if Xn+1 is negatively dependent to (X1,⋯,Xn) for each n≥1.
Suppose that X1 and X2 are two n-dimensional random vectors defined, respectively, in sub-linear expectation spaces (Ω1,H1,E1) and (Ω2,H2,E2). They are said to be identically distributed if for every Borel-measurable function ψ such that ψ(X1)∈H1, ψ(X2)∈H2,
$$E_1[\psi(X_1)]=E_2[\psi(X_2)],$$
whenever the sub-linear expectations are finite. {Xn}∞n=1 is said to be identically distributed if, for each i≥1, Xi and X1 are identically distributed.
In the sequel we assume that E is countably sub-additive, i.e., E(X)≤∑∞n=1E(Xn) whenever X≤∑∞n=1Xn, X,Xn∈H, and X≥0, Xn≥0, n=1,2,…. Let C denote a positive constant whose value may differ from place to place.
As discussed in Zhang [5], by the definition of negative dependence, if X1,X2,…,Xn are negatively dependent random variables and f1,f2,…,fn are all nonincreasing (or all nondecreasing) functions, then f1(X1),f2(X2),…,fn(Xn) are still negatively dependent random variables.
We cite the following lemmas under sub-linear expectations.
Lemma 2.1. (See Proposition 1.3.7 of Peng [2]) Under sub-linear expectation space (Ω,H,E), if X,Y∈H, E[Y]=E[−Y]=0, then E[X+αY]=E[X], for any α∈R.
Lemma 2.2. (See Lemma 4.5 (iii) of Zhang [4]) If E is countably sub-additive under sub-linear expectation space (Ω,H,E), then for X∈H,
$$E|X|\le C_V(|X|).$$
Lemma 2.3. (See Theorem 2.1 of Zhang [5] and its proof.) Assume that p>1 and {Xn;n≥1} is a sequence of negatively dependent random variables under sub-linear expectation space (Ω,H,E). Then for each n≥1, there exists a positive constant C=C(p) depending on p such that, for 1<p≤2,
$$E\Big|\sum_{i=1}^{n}X_i\Big|^p\le C\Big[\sum_{i=1}^{n}E|X_i|^p+\Big(\sum_{i=1}^{n}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^p\Big], \qquad (2.1)$$
and for p>2,
$$E\Big|\sum_{i=1}^{n}X_i\Big|^p\le C\Big\{\sum_{i=1}^{n}E|X_i|^p+\Big(\sum_{i=1}^{n}EX_i^2\Big)^{p/2}+\Big(\sum_{i=1}^{n}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^p\Big\}. \qquad (2.2)$$
Proof. For the reader's convenience, we give a detailed proof here. We first prove (2.1). Set Tk=max{Xk,Xk+Xk−1,…,Xk+⋯+X1} and ˘Tn=max{|Xn|,|Xn+Xn−1|,…,|Xn+⋯+X1|}. Since T+k+Xk+1+⋯+Xn≤Tn, we have T+k≤2˘Tn. Substituting x=Xk and y=T+k−1, k=n,…,2, into the following elementary inequality
$$|x+y|^p\le 2^{2-p}|x|^p+|y|^p+px|y|^{p-1}\operatorname{sgn}(y),\qquad 1<p\le 2,$$
results in
$$|T_n|^p\le 2^{2-p}|X_n|^p+(T_{n-1}^{+})^p+pX_n(T_{n-1}^{+})^{p-1}\le 2^{2-p}|X_n|^p+|T_{n-1}|^p+pX_n(T_{n-1}^{+})^{p-1}\le\cdots\le 2^{2-p}\sum_{i=1}^{n}|X_i|^p+p\sum_{i=2}^{n}X_i(T_{i-1}^{+})^{p-1},$$
which by the definition of negative dependence and Hölder inequality under sub-linear expectations (see Proposition 1.4.2 of Peng [2]), implies that
$$E|T_n|^p\le 2^{2-p}E\Big[\sum_{i=1}^{n}|X_i|^p\Big]+p\sum_{i=2}^{n}E\big[X_i(T_{i-1}^{+})^{p-1}\big]\le 2^{2-p}E\Big[\sum_{i=1}^{n}|X_i|^p\Big]+p2^{p-1}\sum_{i=2}^{n}\big(E[X_i]\big)^{+}\big(E[\breve{T}_n^{\,p}]\big)^{1-1/p}.$$
Similarly,
$$E\Big|\max\{-X_n,-X_n-X_{n-1},\ldots,-X_n-\cdots-X_1\}\Big|^p\le 2^{2-p}E\Big[\sum_{i=1}^{n}|X_i|^p\Big]+p2^{p-1}\sum_{i=2}^{n}\big(-E[-X_i]\big)^{-}\big(E[\breve{T}_n^{\,p}]\big)^{1-1/p}.$$
Therefore
$$E\big[\breve{T}_n^{\,p}\big]\le 2^{3-p}E\Big[\sum_{i=1}^{n}|X_i|^p\Big]+p2^{p}\sum_{i=1}^{n}\big[(E[X_i])^{+}+(-E[-X_i])^{-}\big]\big(E[\breve{T}_n^{\,p}]\big)^{1-1/p},$$
which implies that (2.1) holds.
Next, by (2.4) of Zhang [5] and its proof, we see that for p>2
$$E\Big[\max_{1\le k\le n}|S_k|^p\Big]\le C_p\Big\{\sum_{i=1}^{n}E\big[|X_i|^p\big]+\Big(\sum_{i=1}^{n}E\big[|X_i|^2\big]\Big)^{p/2}+\Big(\sum_{i=1}^{n}\big[(-E[-X_i])^{-}+(E[X_i])^{+}\big]\Big)^p\Big\}, \qquad (2.3)$$
which implies that (2.2) holds.
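The following small Monte Carlo experiment is a rough sketch, not part of the paper, illustrating the shape of the Rosenthal-type bound (2.2) in the classical i.i.d. mean-zero special case (independent variables are in particular negatively dependent, and the last term of (2.2) vanishes). The sample size, the exponent p and the constant C below are arbitrary choices of ours, not the C(p) of the lemma.

```python
import numpy as np

# Monte Carlo spot-check of (2.2) in the classical i.i.d., mean-zero special case.
# All numerical choices (n, p, C, the t-distribution) are toy choices of ours.

rng = np.random.default_rng(1)
n, p, C, reps = 200, 3.0, 20.0, 20000

X = rng.standard_t(df=6, size=(reps, n))        # centred, E|X|^p < infinity for p = 3
lhs = np.mean(np.abs(X.sum(axis=1)) ** p)       # Monte Carlo estimate of E|sum_i X_i|^p
rhs = C * (n * np.mean(np.abs(X) ** p)          # C * [ sum_i E|X_i|^p
           + (n * np.mean(X ** 2)) ** (p / 2))  #       + (sum_i E X_i^2)^(p/2) ]
print(f"lhs = {lhs:.1f}  <=  rhs = {rhs:.1f} : {lhs <= rhs}")
```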
By Lemma 2.3 and an argument similar to that of Theorem 2.3.1 in Stout [31], we obtain the following lemma.
Lemma 2.4. Assume that q>1 and {Xn;n≥1} is a sequence of negatively dependent random variables under sub-linear expectation space (Ω,H,E). Then for each n≥1, there exists a positive constant C=C(q) depending only on q such that, for 1<q≤2,
$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|^q\Big)\le C(\log n)^q\Big\{\sum_{i=1}^{n}E|X_i|^q+\Big(\sum_{i=1}^{n}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\Big\}, \qquad (2.4)$$
and for q>2,
$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|^q\Big)\le C(\log n)^q\Big\{\sum_{i=1}^{n}E|X_i|^q+\Big(\sum_{i=1}^{n}EX_i^2\Big)^{q/2}+\Big(\sum_{i=1}^{n}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\Big\}. \qquad (2.5)$$
Proof. For the reader's convenience, we also give a detailed proof here. We only prove (2.4), since (2.5) follows directly from (2.3). We first prove (2.4) for n=2^k, where k is an arbitrary positive integer. To avoid obscuring the main idea, we give the proof only for k=6. Let X_{r,s}=\sum_{i=r+1}^{s}X_i for 0≤r<s≤2^6. We consider the following collections of X_{r,s}:
$$\{X_{0,64}\},\quad\{X_{0,32},X_{32,64}\},\quad\{X_{0,16},X_{16,32},X_{32,48},X_{48,64}\},\quad\{X_{0,8},\ldots,X_{56,64}\},\quad\{X_{0,4},\ldots,X_{60,64}\},\quad\{X_{0,2},\ldots,X_{62,64}\},\quad\{X_{0,1},\ldots,X_{63,64}\}.$$
There are k+1=7 collections. Now choose 1≤i≤2^6 and expand S_i using terms from the collections above, with the minimal possible number of terms in the expansion. Clearly, at most one term is needed from each collection. For example,
$$X_{0,62}=X_{0,32}+X_{32,48}+X_{48,56}+X_{56,60}+X_{60,62}.$$
Hence each expansion has at most k+1=7 terms in it. Denote the expansion of Si by
$$S_i=\sum_{j=1}^{h}X_{i_{j-1},i_j},\qquad h\le 7.$$
It follows from Hölder inequality that
$$|S_i|^q\le 7^{q-1}\sum_{j=1}^{h}\big|X_{i_{j-1},i_j}\big|^q.$$
Now
$$\sum_{j=1}^{h}\big|X_{i_{j-1},i_j}\big|^q\le|X_{0,64}|^q+\big(|X_{0,32}|^q+|X_{32,64}|^q\big)+\big(|X_{0,16}|^q+|X_{16,32}|^q+|X_{32,48}|^q+|X_{48,64}|^q\big)+\cdots+\big(|X_{0,1}|^q+|X_{1,2}|^q+\cdots+|X_{63,64}|^q\big).$$
Hence,
$$\max_{1\le i\le 2^6}|S_i|^q\le 7^{q-1}\Big[|X_{0,64}|^q+\big(|X_{0,32}|^q+|X_{32,64}|^q\big)+\big(|X_{0,16}|^q+|X_{16,32}|^q+|X_{32,48}|^q+|X_{48,64}|^q\big)+\cdots+\big(|X_{0,1}|^q+|X_{1,2}|^q+\cdots+|X_{63,64}|^q\big)\Big].$$
There are k+1=7 parenthetical expressions inside square brackets. By the Cr inequality, we see that
$$\sum_{i=1}^{m}|\xi_i|^q\le\Big(\sum_{i=1}^{m}|\xi_i|\Big)^q,\qquad\forall\,\xi_i\in\mathbb{R},\ m\ge 1,$$
which implies
$$\Big(\sum_{i=1}^{32}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q+\Big(\sum_{i=33}^{64}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\le\Big(\sum_{i=1}^{2^6}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q,$$
$$\Big(\sum_{i=1}^{16}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q+\cdots+\Big(\sum_{i=49}^{64}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\le\Big(\sum_{i=1}^{2^6}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q,$$
$$\ldots$$
$$\big[|E(-X_1)|+|E(X_1)|\big]^q+\cdots+\big[|E(-X_{64})|+|E(X_{64})|\big]^q\le\Big(\sum_{i=1}^{2^6}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q.$$
By (2.1) and the above discussion,
$$E\Big[\max_{1\le i\le 2^6}|S_i|^q\Big]\le 7^{q-1}\cdot 7\,C_q\Big[\sum_{i=1}^{2^6}E|X_i|^q+\Big(\sum_{i=1}^{2^6}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\Big].$$
With appropriate notation, the above argument extended to arbitrary k≥1 gives
$$E\Big[\max_{1\le i\le 2^k}|S_i|^q\Big]\le(k+1)^qC_q\Big[\sum_{i=1}^{2^k}E|X_i|^q+\Big(\sum_{i=1}^{2^k}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\Big]. \qquad (2.6)$$
Given n such that n≠2^k for any k≥1, choose k satisfying 2^{k−1}<n<2^k and redefine Xi=0 for n<i≤2^k. By (2.6), we see that
$$E\Big[\max_{1\le i\le n}|S_i|^q\Big]\le(k+1)^qC_q\Big[\sum_{i=1}^{n}E|X_i|^q+\Big(\sum_{i=1}^{n}\big[|E(-X_i)|+|E(X_i)|\big]\Big)^q\Big].$$
Since 2^{k−1}<n implies (k+1)^q≤[\log(4n)/\log 2]^q, (2.4) follows.
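The block-selection device used in the proof above (Stout's dyadic decomposition) is easy to make explicit. The following sketch, with a function name of our own choosing, computes the at most k+1 dyadic blocks whose partial sums compose S_i when n=2^k; it reproduces the example for S_62 given in the text.

```python
# Sketch of Stout's dyadic device: for n = 2^k, every prefix sum S_i is written as a sum
# over at most k+1 dyadic blocks (r, s], one from each collection. Names are ours.

def dyadic_blocks(i, k):
    """Return the dyadic blocks (r, s], with s - r a power of two, whose union is (0, i]."""
    blocks, r = [], 0
    for level in range(k, -1, -1):           # candidate block lengths 2^k, 2^(k-1), ..., 1
        length = 1 << level
        if i - r >= length:                  # take at most one block from each collection
            blocks.append((r, r + length))
            r += length
    return blocks

# Example matching the text: S_62 with k = 6 uses 5 of the 7 collections.
print(dyadic_blocks(62, 6))   # [(0, 32), (32, 48), (48, 56), (56, 60), (60, 62)]
```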
Our main results are the following.
Theorem 3.1. Suppose that α>1/2, αp>1 and {Xn;n≥1} is a sequence of negatively dependent random variables, identically distributed as X, in a sub-linear expectation space (Ω,H,E). Assume that E(X)=E(−X)=0 when p>1. Suppose that {ani;1≤i≤n,n≥1} is an array of real numbers, all nonnegative or all non-positive, such that
$$\sum_{i=1}^{n}|a_{ni}|^p=O(n^{\delta})\quad\text{for }0<\delta<1. \qquad (3.1)$$
Let CV(|X|p)<∞. Then for any ε>0,
$$\sum_{n=1}^{\infty}n^{\alpha p-2}\,V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}<\infty. \qquad (3.2)$$
Theorem 3.2. Suppose that p>1, α≥1/2, αp>1 and {Xn;n≥1} is a sequence of negatively dependent random variables, identically distributed as X, in a sub-linear expectation space (Ω,H,E). Assume that E(X)=E(−X)=0. Suppose that {ani;1≤i≤n,n≥1} is an array of real numbers, all nonnegative or all non-positive, such that (3.1) holds. Let CV(|X|p)<∞. Then for any ε>0,
$$\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\,C_V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|-\varepsilon n^{\alpha}\Big)^{+}<\infty. \qquad (3.3)$$
Remark 3.1. Under the assumptions of Theorem 3.2, we see that for all ε>0,
$$\begin{aligned}\infty&>\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}C_V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|-\varepsilon n^{\alpha}\Big)^{+}\\&=\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{0}^{\varepsilon n^{\alpha}}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\int_{\varepsilon n^{\alpha}}^{\infty}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|-\varepsilon n^{\alpha}>t\Big)dt\\&\ge C\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>2\varepsilon n^{\alpha}\Big).\end{aligned} \qquad (3.4)$$
By (3.4), we can conclude that the complete moment convergence implies the complete convergence.
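Before turning to the proofs, the following toy Monte Carlo sketch (entirely our own choices of α, p, δ, ε and weights, in the classical i.i.d. special case, which is a particular instance of the negatively dependent setting) illustrates the weight condition (3.1): for a_{ni}=i^{(δ-1)/p} one has Σ_{i≤n}|a_{ni}|^p=Σ_{i≤n}i^{δ-1}=O(n^δ). It also shows the summands of the series in (3.2) decaying in n.

```python
import numpy as np

# Toy classical-probability illustration of Theorem 3.1 (our own parameter choices).
# Weights a_{ni} = i^((delta-1)/p) satisfy (3.1); eps is taken small so that the empirical
# tail probabilities remain visible at these moderate values of n.

rng = np.random.default_rng(2)
alpha, p, delta, eps, reps = 0.75, 2.0, 0.5, 0.1, 2000   # alpha*p = 1.5 > 1, 0 < delta < 1

for n in [50, 100, 200, 400]:
    a = np.arange(1, n + 1) ** ((delta - 1) / p)          # weights a_{ni}
    X = rng.standard_normal(size=(reps, n))               # i.i.d., mean zero, E|X|^p < infinity
    max_partial = np.max(np.abs(np.cumsum(a * X, axis=1)), axis=1)
    tail = np.mean(max_partial > eps * n ** alpha)        # empirical V(max_j |...| > eps n^alpha)
    print(n, n ** (alpha * p - 2) * tail)                 # summand of the series in (3.2)
```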
Proof. For all 1≤i≤n, n≥1, write
$$Y_{ni}=-n^{\alpha}I(a_{ni}X_i<-n^{\alpha})+a_{ni}X_iI(|a_{ni}X_i|\le n^{\alpha})+n^{\alpha}I(a_{ni}X_i>n^{\alpha}),\qquad T_{nj}=\sum_{i=1}^{j}(Y_{ni}-EY_{ni}),\quad j=1,2,\ldots,n.$$
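In code, the truncation Y_{ni} above is simply the weighted observation a_{ni}X_i clipped to [-n^α, n^α]; a minimal helper (the name and vectorised form are ours, for illustration only) could read:

```python
import numpy as np

def truncate(weighted_x, level):
    """Y_{ni}: the weighted observation a_{ni} X_i clipped to [-level, level],
    with level = n**alpha, matching the three-piece definition above."""
    return np.clip(weighted_x, -level, level)
```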
We easily observe that for all ε>0,
$$\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}\subset\Big\{\max_{1\le j\le n}|a_{nj}X_j|>n^{\alpha}\Big\}\cup\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y_{ni}\Big|>\varepsilon n^{\alpha}\Big\}, \qquad (4.1)$$
which results in
$$\begin{aligned}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big\}&\le V\Big(\max_{1\le j\le n}|a_{nj}X_j|>n^{\alpha}\Big)+V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y_{ni}\Big|>\varepsilon n^{\alpha}\Big)\\&\le\sum_{j=1}^{n}V\big(|a_{nj}X_j|>n^{\alpha}\big)+V\Big(\max_{1\le j\le n}|T_{nj}|>\varepsilon n^{\alpha}-\max_{1\le j\le n}\Big|\sum_{i=1}^{j}EY_{ni}\Big|\Big).\end{aligned} \qquad (4.2)$$
First, we establish that
$$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}EY_{ni}\Big|\to 0,\quad\text{as }n\to\infty. \qquad (4.3)$$
We study the following three cases.
(ⅰ) If 1/2<α≤1, then p>1. By EX=E(−X)=0, |E(X−Y)|≤E|X−Y|, CV(|X|p)<∞ and Lemmas 2.1 and 2.2, we see that
n−αmax1≤j≤n|j∑i=1EYni|≤n−αn∑i=1|EYni|≤n−αn∑i=1|E[Yni−aniXi]|≤n−αn∑i=1E|Yni−aniXi|≤Cn∑i=1E|aniX|pnαp≤Cn−αpn∑i=1|ani|pE|X|p≤Cnδ−αpCV(|X|p)→0, asn→∞. | (4.4) |
(ⅱ) If α>1, p<1, then by CV(|X|p)<∞, and Lemma 2.2, we see that
n−αmax1≤j≤n|j∑i=1EYni|≤n−αn∑i=1|EYni|≤n−αn∑i=1|EaniXiI(|aniXi|≤nα)|+Cn∑i=1V(|aniXi|>nα)≤n−αn∑i=1|EaniXI(|aniX|≤nα)|+Cn∑i=1V(|aniX|>nα)≤Cn∑i=1E(|aniX|p)nαp+n∑i=1E(|aniX|p)nαp≤Cn−αpn∑i=1|ani|pE|X|p≤Cnδ−αpCV(|X|p)→0, asn→∞. | (4.5) |
(ⅲ) If α>1, p≥1, then by E|X|≤(E|X|p)1/p≤(CV(|X|p))1/p<∞, Markov inequality under sub-linear expectations, Hölder inequality, we see that
n−αmax1≤j≤n|j∑i=1EYni|≤n−αn∑i=1|EYni|≤n−αn∑i=1E|aniXiI(|aniXi|≤nα)|+n∑i=1V(|aniXi|>nα)≤Cn−αn∑i=1|ani|+Cn−αn∑i=1|ani|≤Cn−α(n∑i=1|ani|p)1/pn1−1/p≤Cn1−α−(1−δ)/p→0, asn→∞. | (4.6) |
Combining (4.4)–(4.6) results in (4.3) immediately. Hence, for n sufficiently large,
$$V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>\varepsilon n^{\alpha}\Big)\le\sum_{j=1}^{n}V\big(|a_{nj}X_j|>n^{\alpha}\big)+V\Big(\max_{1\le j\le n}|T_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\Big). \qquad (4.7)$$
To prove (3.2), we only need to establish that
$$I:=\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^{n}V\big(|a_{ni}X_i|>n^{\alpha}\big)<\infty \qquad (4.8)$$
and
$$II:=\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big(\max_{1\le j\le n}|T_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\Big)<\infty. \qquad (4.9)$$
For I, by Markov inequality under sub-linear expectations, and Lemma 2.2, we obtain
$$I=\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^{n}V\big(|a_{ni}X|>n^{\alpha}\big)\le C\sum_{n=1}^{\infty}n^{-2}\sum_{i=1}^{n}E|a_{ni}X|^p\le C\sum_{n=1}^{\infty}n^{\delta-2}C_V(|X|^p)<\infty. \qquad (4.10)$$
As pointed out before Lemma 2.2, for each n≥1, {Yni−EYni;1≤i≤n} is also a sequence of negatively dependent random variables. By Lemma 2.4, the Markov inequality under sub-linear expectations and the Cr inequality, we conclude that for q>2,
II≤C∞∑n=1nαp−2n−αqE(max1≤j≤n|j∑i=1(Yni−EYni)|q)≤C∞∑n=1nαp−2−αq(logn)q(n∑i=1E|Yni−EYni|q+(n∑i=1E|Yni−EYni|2)q/2+(n∑i=1[|E(−Yni)|+|E(Yni)|])q)≤C∞∑n=1nαp−2−αq(logn)qn∑i=1E|Yni|q+C∞∑n=1nαp−2−αq(logn)q(n∑i=1E|Yni|2)q/2+C∞∑n=1nαp−2−αq(logn)q(n∑i=1[|E(−Yni)|+|E(Yni)|])q=:II1+II2+II3. | (4.11) |
Taking q>max{2,p}, by the Cr inequality, Markov inequality under sub-linear expectations, and Lemma 2.2, we have
II1≤C∞∑n=1nαp−2−αq(logn)qn∑i=1[E|aniXi|qI(|aniXi|≤nα)+nαqV(|aniXi|>nα)]=C∞∑n=1nαp−2−αq(logn)qn∑i=1E|aniX|qI(|aniX|≤nα)+C∞∑n=1nαp−2(logn)qn∑i=1V(|aniX|>nα)≤C∞∑n=1nαp−2(logn)qn∑i=1E|aniX|qI(|aniX|≤nα)nαq+C∞∑n=1n−2(logn)qn∑i=1E|aniX|p≤C∞∑n=1nαp−2(logn)qn∑i=1E|aniX|pnαp+C∞∑n=1n−2(logn)qn∑i=1|ani|pCV(|X|p)≤C∞∑n=1n−2(logn)qn∑i=1|ani|pCV(|X|p)+C∞∑n=1nδ−2(logn)q≤C∞∑n=1nδ−2(logn)q<∞. | (4.12) |
For II2, we study the following cases.
(ⅰ) If p≥2, observe that ∑_{i=1}^{n}a_{ni}^2≤(∑_{i=1}^{n}|a_{ni}|^p)^{2/p}n^{1−2/p}≤Cn^{1−2(1−δ)/p}. Taking q>max{2, 2p(αp−1)/(2αp−p+2(1−δ))}, by the Cr inequality and EX^2≤(E|X|^p)^{2/p}≤(C_V(|X|^p))^{2/p}<∞, we see that
II2≤C∞∑n=1nαp−2−αq(logn)q(n∑i=1[E|aniXi|2I(|aniXi|≤nα)+n2αV(|aniXi|>nα)])q/2≤C∞∑n=1nαp−2−αq(logn)q(n∑i=1E|aniX|2I(|aniX|≤nα))q/2+C∞∑n=1nαp−2(logn)q(n∑i=1V(|aniX|>nα))q/2≤C∞∑n=1nαp−2−αq(logn)q(n∑i=1a2ni)q/2+C∞∑n=1nαp−2−αq(logn)q(n∑i=1a2ni)q/2≤C∞∑n=1nαp−2−αq(logn)q(n1−2(1−δ)/p)q/2≤C∞∑n=1nαp−2−αq+q2−(1−δ)qp(logn)q<∞. | (4.13) |
(ⅱ) If p<2, we take q>2(αp−1)/(αp−δ). By the Cr inequality, Markov inequality under sub-linear expectations, and Lemma 2.2, we see that
[The displayed chain of inequalities (4.14), which bounds II2 and yields II2<∞ for p<2, is rendered only as an image in the source and is not reproduced here.]
For II3, we study the following cases.
(ⅰ) If 1/2<α≤1, then p>1. Taking q>(αp−1)/(αp−δ), by E(X)=E(−X)=0 and Lemmas 2.1 and 2.2, we see that
II3≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[|E[−Yni+aniXi]|+|E[Yni−aniXi]|]]q≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[E[|−Yni+aniXi|]+E[|Yni−aniXi|]]]q≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[E|aniX|pnα(p−1)+E|aniX|pnα(p−1)]]q≤C∞∑n=1nαp−2−αq(logn)qn−α(p−1)q+δq(CV(|X|p))q≤C∞∑n=1nαp−2−αpq+δq(logn)q<∞. | (4.15) |
(ⅱ) If α>1, p<1, then taking q>(αp−1)/(αp−δ), by Lemma 2.2 we obtain
II3≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[E|aniX|I{|aniX|≤nα}+nαV(|aniX|>nα)]]q≤C∞∑n=1nαp−2(logn)q(n∑i=1E|aniX|I{|aniX|≤nα}nα)q+C∞∑n=1nαp−2(logn)q(n−αpn∑i=1E|aniX|p)q≤C∞∑n=1nαp−2(logn)q(n∑i=1E|aniX|pnαp)q≤C∞∑n=1nαp−2(logn)q(n−αpn∑i=1|ani|p)q(CV(|X|p))q≤C∞∑n=1nαp−2+q(δ−αp)(logn)q<∞. |
(ⅲ) If α>1, p>1, then E|X|≤(E|X|^p)^{1/p}≤(C_V(|X|^p))^{1/p}<∞. We take q>p(αp−1)/(αp−p+1−δ). Hence, by the Cr inequality, the Markov inequality under sub-linear expectations and the Hölder inequality, we see that
II3≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[E|aniX|I{|aniX|≤nα}+nαV(|aniX|>nα)]]q≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1[|ani|+E|aniX|]]q≤C∞∑n=1nαp−2−αq(logn)q[n∑i=1|ani|]q≤C∞∑n=1nαp−2−αq(logn)q[(n∑i=1|ani|p)1/pn1−1/p]q≤C∞∑n=1nαp−2−αq+q−(1−δ)qp(logn)q<∞. |
Hence, the proof of Theorem 3.1 is finished.
Proof. For all ε>0 and any t>0, we see that
∞∑n=1nαp−2−αCV(max1≤j≤n|j∑i=1aniXi|−εnα)+=∞∑n=1nαp−2−α∫∞0V(max1≤j≤n|j∑i=1aniXi|−εnα>t)dt=∞∑n=1nαp−2−α∫nα0V(max1≤j≤n|j∑i=1aniXi|>εnα+t)dt+∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|j∑i=1aniXi|>εnα+t)dt≤∞∑n=1nαp−2−α∫nα0V(max1≤j≤n|j∑i=1aniXi|>εnα)dt+∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|j∑i=1aniXi|>t)dt≤∞∑n=1nαp−2V(max1≤j≤n|j∑i=1aniXi|>εnα)+∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|j∑i=1aniXi|>t)dt=:III1+III2. | (4.16) |
By Theorem 3.1, we conclude that III1<∞. Therefore, it suffices to establish III2<∞. Without loss of generality, assume that ani≥0. For all 1≤i≤n, n≥1 and t≥nα, write
$$Y'_{ni}=-tI(a_{ni}X_i<-t)+a_{ni}X_iI(|a_{ni}X_i|\le t)+tI(a_{ni}X_i>t),$$
$$Z_{ni}=a_{ni}X_i-Y'_{ni}=(a_{ni}X_i+t)I(a_{ni}X_i<-t)+(a_{ni}X_i-t)I(a_{ni}X_i>t),$$
$$T'_{nj}=\sum_{i=1}^{j}(Y'_{ni}-EY'_{ni}),\qquad j=1,2,\ldots,n.$$
We easily see that for all ε>0,
$$V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_i\Big|>t\Big)\le\sum_{i=1}^{n}V\big(|a_{ni}X_i|>t\big)+V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y'_{ni}\Big|>t\Big), \qquad (4.17)$$
which results in
III2:=∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|j∑i=1aniXi|>t)dt≤∞∑n=1nαp−2−αn∑i=1∫∞nαV(|aniXi|>t)dt+∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|j∑i=1Y′ni|>t)dt=:III21+III22. | (4.18) |
For III21, by p>1, and Lemma 2.2, we obtain
III21:=∞∑n=1nαp−2−αn∑i=1∫∞nαV(|aniXi|>t)dt=∞∑n=1nαp−2−αn∑i=1∫∞nαpV(|aniX|p>s)1ps1p−1ds≤C∞∑n=1nαp−2−αn∑i=1∫∞nαpV(|aniX|p>s)(nαp)1p−1ds≤C∞∑n=1n−2n∑i=1CV(|aniX|p)=C∞∑n=1n−2n∑i=1|ani|pCV(|X|p)≤C∞∑n=1nδ−2<∞. | (4.19) |
For III22, we first establish that
$$\sup_{t\ge n^{\alpha}}\frac{1}{t}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}EY'_{ni}\Big|\to 0,\quad\text{as }n\to\infty. \qquad (4.20)$$
For 1≤i≤n, n≥1 and p>1, by EXn=E(−Xn)=0 and Lemma 2.1, we see that EY′ni=E(−Zni). If aniXi>t, 0<Zni=aniXi−t<aniXi. If aniXi<−t, aniXi<Zni=aniXi+t≤0. Hence |Zni|≤|aniXi|I(|aniXi|>t). Then, by Lemma 2.2, we have
supt≥nα1tmax1≤j≤n|j∑i=1EY′ni|=supt≥nα1tmax1≤j≤n|j∑i=1E(−Zni)|≤Csupt≥nα1tn∑i=1E|Zni|≤Csupt≥nα1tn∑i=1E|aniXi|I(|aniXi|>t)≤Cn∑i=1E|aniX|I(|aniX|>nα)nα≤Cn∑i=1E|aniX|pnαp≤Cnδ−αpCV(|X|p)→0, asn→∞. | (4.21) |
Hence, while n is large enough, for t≥nα,
$$\max_{1\le j\le n}\Big|\sum_{i=1}^{j}EY'_{ni}\Big|\le\frac{t}{2}, \qquad (4.22)$$
which results in
$$V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y'_{ni}\Big|>t\Big)\le V\Big(\max_{1\le j\le n}|T'_{nj}|>\frac{t}{2}\Big). \qquad (4.23)$$
In the following, we present III22<∞, for 1<p≤2 and p>2.
(ⅰ) If 1<p≤2, by (4.23), Lemma 2.4, Markov inequality under sub-linear expectations, and the Cr inequality, we obtain
III22≤∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|T′nj|>t2)dt≤C∞∑n=1nαp−2−α∫∞nαt−2E(max1≤j≤n|T′nj|2)dt≤C∞∑n=1nαp−2−α∫∞nαt−2(logn)2(n∑i=1E|Y′ni−EY′ni|2+(n∑i=1[|E(−Y′ni)|+|E(Y′ni)|])2)dt≤C∞∑n=1nαp−2−α(logn)2∫∞nαt−2n∑i=1E|aniXi|2I(|aniXi|≤nα)dt+C∞∑n=1nαp−2−α(logn)2∫∞nαt−2n∑i=1E|aniXi|2I(nα<|aniXi|≤t)dt+C∞∑n=1nαp−2−α(logn)2n∑i=1∫∞nαV(|aniXi|>t)dt+C∞∑n=1nαp−2−α(logn)2∫∞nαt−2(n∑i=1[|E(−Y′ni)|+|E(Y′ni)|])2dt=:III221+III222+III223+III224. | (4.24) |
For III221, by 1<p≤2, we see that
III221≤C∞∑n=1nαp−2(logn)2n∑i=1E|aniXi|2I(|aniXi|≤nα)n2α=C∞∑n=1nαp−2(logn)2n∑i=1E|aniX|2I(|aniX|≤nα)n2α≤C∞∑n=1nαp−2(logn)2n∑i=1E|aniX|pI(|aniX|≤nα)nαp≤C∞∑n=1nαp−2(logn)2n−αp+δCV(|X|p)≤C∞∑n=1nδ−2(logn)2<∞. | (4.25) |
For III222, since 1<p≤2, by the Markov inequality under sub-linear expectations and Lemma 2.2, we see that
[The displayed chain of inequalities (4.26), which shows III222<∞, is rendered only as an image in the source and is not reproduced here.]
For III223, the same argument as in the proof of III21<∞ shows that III223<∞. For III224, by 1<p≤2, E(−X)=E(X)=0 and Lemma 2.1, we obtain
III224≤C∞∑n=1nαp−2−α(logn)2n−α(n∑i=1[|E[−Y′ni+aniXi]|+|E[Y′ni−aniXi]|])2≤C∞∑n=1nαp−2−α(logn)2n−α(n∑i=1E|aniX|I{|aniX|>nα})2≤C∞∑n=1nαp−2−α(logn)2n−α(n∑i=1E|aniX|pnα(p−1))2≤C∞∑n=1n2δ−αp−2(logn)2(CV(|X|p))2<∞. |
Therefore, we conclude that III22<∞ for 1<p≤2.
(ⅱ) If p>2, by (4.23), E(X)=E(−X)=0, Markov inequality under sub-linear expectations, the Cr inequality, and Lemma 2.4 (for q>2), we see that
III22≤∞∑n=1nαp−2−α∫∞nαV(max1≤j≤n|T′nj|>t2)dt≤C∞∑n=1nαp−2−α∫∞nαt−qE(max1≤j≤n|j∑i=1(Y′ni−EY′ni)|q)dt≤C∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1E|Y′ni−EY′ni|q+(n∑i=1E(Y′ni−EY′ni)2)q/2+(n∑i=1[|E(Y′ni)|+|E(−Y′ni)|])q)dt≤C∞∑n=1nαp−2−α(logn)qn∑i=1∫∞nαt−qE|Y′ni|qdt+∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1EY′ni2)q/2dt+∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1[|E(Y′ni)|+|E(−Y′ni)|])qdt=:IV1+IV2+IV3. | (4.27) |
For IV1, we obtain
IV1=C∞∑n=1nαp−2−α(logn)qn∑i=1∫∞nαt−qE|aniX|qI(|aniX|≤nα)dt+C∞∑n=1nαp−2−α(logn)qn∑i=1∫∞nαt−qE|aniX|qI(nα<|aniX|≤t)dt+C∞∑n=1nαp−2−α(logn)qn∑i=1∫∞nαV(|aniX|>t)dt=:IV11+IV12+IV13. | (4.28) |
By the similar proofs of III221<∞ and III222<∞ (with q in place of the exponent 2), we can see that IV11<∞ and IV12<∞. Similarly, by the proof of III21<∞, we can see that IV13<∞.
For IV2, we obtain
IV2≤C∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1E|aniX|2I(|aniX|≤nα))q/2dt+C∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1E|aniX|2I(nα<|aniX|≤t))q/2dt+C∞∑n=1nαp−2−α(logn)q∫∞nα(n∑i=1V(|aniX|>t))q/2dt=:IV21+IV22+IV23. | (4.29) |
For IV21, taking q>max{2, 2p(αp−1)/(2αp−p+2(1−δ))}, by the Cr inequality, the Jensen inequality under sub-linear expectations and Lemma 2.2, we obtain
IV21=C∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1E|aniX|2I(|aniX|≤nα))q/2dt≤C∞∑n=1nαp−2−α(logn)qnα−αq(n∑i=1E|aniX|2I(|aniX|≤nα))q/2≤C∞∑n=1nαp−2−α(logn)qnα−αq(n∑i=1a2ni)q/2(E|X|p)q/p≤C∞∑n=1nαp−2−α(logn)qnα−αq(n1−2(1−δ)/p)q/2(CV(|X|p))q/p≤C∞∑n=1nαp−2−αq+q2−(1−δ)qp(logn)q<∞. | (4.30) |
For IV22, taking q>max{2, 2(αp−1)/(αp−δ)}, by the Jensen inequality under sub-linear expectations and Lemma 2.2, we have
IV22=C∞∑n=1nαp−2−α(logn)q∫∞nαt−q(n∑i=1E|aniX|2I(nα<|aniX|≤t))q/2dt≤C∞∑n=1nαp−2−α(logn)q(n∑i=1E|aniX|2I(|aniX|>nα))q/2∫∞nαt−qdt≤C∞∑n=1nαp−2(logn)q(n∑i=1E|aniX|2I(|aniX|>nα)n2α)q/2≤C∞∑n=1nαp−2(logn)q(n∑i=1E|aniX|pI(|aniX|>nα)nαp)q/2≤C∞∑n=1nαp−2(logn)qn−αpq/2nδq/2(CV(|X|p))q/2≤C∞∑n=1nαp−2−αpq/2+δq/2(logn)q<∞. | (4.31) |
For IV23, by Markov inequality under sub-linear expectations, and Lemma 2.2, we conclude that
$$\sup_{t\ge n^{\alpha}}\sum_{i=1}^{n}V\big(|a_{ni}X|>t\big)\le\sum_{i=1}^{n}V\big(|a_{ni}X|>n^{\alpha}\big)\le\sum_{i=1}^{n}\frac{E|a_{ni}X|^p}{n^{\alpha p}}\le Cn^{\delta-\alpha p}\to 0,\quad\text{as }n\to\infty. \qquad (4.32)$$
Hence, for t≥nα and all n sufficiently large, we deduce that
$$\sum_{i=1}^{n}V\big(|a_{ni}X|>t\big)<1. \qquad (4.33)$$
By (4.29) and (4.33), we obtain
$$IV_{23}=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}\Big(\sum_{i=1}^{n}V(|a_{ni}X|>t)\Big)^{q/2}dt\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}(\log n)^q\int_{n^{\alpha}}^{\infty}\sum_{i=1}^{n}V(|a_{ni}X|>t)\,dt\le C\sum_{n=1}^{\infty}n^{\delta-2}(\log n)^q<\infty. \qquad (4.34)$$
For IV3, taking q>max{(αp−1)/(αp−δ), 2}, by E(X)=E(−X)=0, Lemma 2.1 and Lemma 2.2, we see that
IV3≤C∞∑n=1nαp−2−α(logn)qn−(q−1)α(n∑i=1[|E(Y′ni−aniXi)|+|E(−Y′ni+aniXi)|])q≤C∞∑n=1nαp−2−α−(q−1)α(logn)q(n∑i=1E|aniXi|pnα(p−1))q≤C∞∑n=1nαp−2−α−(q−1)α(logn)qn−αq(p−1)+δq(CV(|X|p))q≤C∞∑n=1nαp−2−αqp+δq(logn)q<∞. |
Hence, the proof of Theorem 3.2 is finished.
We have established new results on complete convergence and complete moment convergence for weighted sums of negatively dependent random variables under sub-linear expectations. The theorems of this article extend the corresponding convergence results for weighted sums of extended negatively dependent random variables in classical probability space.
This research was supported by the Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), the Natural Science Foundation Program of Jiangxi Province (No. 20202BABL211005), the National Natural Science Foundation of China (No. 61662037) and the Jiangxi Province Key S & T Cooperation Project (No. 20212BDH80021).
All authors declare no conflict of interest in this paper.
[1] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic analysis and applications, Berlin: Springer, 2007, 541–561. https://doi.org/10.1007/978-3-540-70847-6_25
[2] S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7
[3] L. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0
[4] L. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1
[5] L. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2
[6] Q. Wu, Precise asymptotics for complete integral convergence under sublinear expectations, Math. Probl. Eng., 2020 (2020), 3145935. https://doi.org/10.1155/2020/3145935
[7] M. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probabil. Lett., 186 (2022), 109464. https://doi.org/10.1016/j.spl.2022.109464
[8] J. Xu, L. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta Math. Appl. Sin. Engl. Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9
[9] J. Xu, L. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta Math. Appl. Sin. Engl. Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8
[10] Q. Wu, Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053
[11] L. Zhang, J. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probabil. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022
[12] H. Zhong, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1
[13] Z. Hu, Y. Yang, Some inequalities and limit theorems under sublinear expectations, Acta Math. Appl. Sin. Engl. Ser., 33 (2017), 451–462. https://doi.org/10.1007/s10255-017-0673-2
[14] Z. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0
[15] X. Chen, Q. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540
[16] L. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z
[17] F. Hu, Z. Chen, D. Zhang, How big are the increments of G-Brownian motion, Sci. China Math., 57 (2014), 1687–1700. https://doi.org/10.1007/s11425-014-4816-0
[18] F. Gao, M. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations, Sci. China Math., 41 (2011), 337–352. https://doi.org/10.1360/012009-879
[19] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662
[20] M. Xu, K. Cheng, Precise asymptotics in the law of the iterated logarithm under sublinear expectations, Math. Probl. Eng., 2021 (2021), 6691857. https://doi.org/10.1155/2021/6691857
[21] M. Xu, K. Cheng, Equivalent conditions of complete th moment convergence for weighted sums of IID random variables under sublinear expectations, Discrete Dyn. Nat. Soc., 2021 (2021), 7471550. https://doi.org/10.1155/2021/7471550
[22] M. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x
[23] M. Xu, K. Cheng, Note on precise asymptotics in the law of the iterated logarithm under sublinear expectations, Math. Probl. Eng., 2022 (2022), 6058563. https://doi.org/10.1155/2022/6058563
[24] P. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25
[25] Y. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201.
[26] Y. Zhang, X. Ding, Further research on complete moment convergence for moving average process of a class of random variables, J. Inequal. Appl., 2017 (2017), 46. https://doi.org/10.1186/s13660-017-1322-2
[27] B. Meng, D. Wang, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Commun. Stat.-Theor. M., 51 (2022), 3847–3863. https://doi.org/10.1080/03610926.2020.1804587
[28] M. Ko, Complete moment convergence of moving average process generated by a class of random variables, J. Inequal. Appl., 2015 (2015), 225. https://doi.org/10.1186/s13660-015-0745-x
[29] B. Meng, D. Wang, Q. Wu, Convergence of asymptotically almost negatively associated random variables with random coefficients, Commun. Stat.-Theor. M., in press. https://doi.org/10.1080/03610926.2021.1963457
[30] S. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta Math. Sin., Engl. Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z
[31] W. Stout, Almost sure convergence, New York: Academic Press, 1974.