Research article

An extension on the rate of complete moment convergence for weighted sums of weakly dependent random variables

  • Received: 28 June 2022 Revised: 31 August 2022 Accepted: 02 September 2022 Published: 09 October 2022
  • MSC : 60F15

  • The authors study the convergence rate of complete moment convergence for weighted sums of weakly dependent random variables without the assumption of identical distribution. Under the moment condition $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $ for $ 0 < \gamma < \alpha $ with $ 1 < \alpha \le 2 $, we establish a complete $ \alpha $-th moment convergence theorem for weighted sums of weakly dependent random variables, which improves and extends the related known results in the literature.

    Citation: Haiwu Huang, Yuan Yuan, Hongguo Zeng. An extension on the rate of complete moment convergence for weighted sums of weakly dependent random variables[J]. AIMS Mathematics, 2023, 8(1): 622-632. doi: 10.3934/math.2023029




    Existing methods and algorithms in the literature often assume that the variables involved are independent, but this assumption is frequently implausible: in many stochastic models and statistical applications, the variables involved are dependent. Hence, it is important and meaningful to extend results for independent variables to dependent cases. One such dependence structure is weak dependence (i.e., $ {{\rho }^{*}} $-mixing or $ \tilde{\rho} $-mixing), which has attracted the attention of many researchers.

    Definition 1.1. Let $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ be a sequence of random variables defined on a probability space $ \left(\Omega, \mathcal{F}, P \right) $. For any $ S\subset \text{N} = \left\{ 1, 2, \ldots \right\} $, define $ {{\mathcal{F}}_{S}} = \sigma \left({{X}_{i}}, i\in S \right) $. The set $ {{L}_{2}}\left({{\mathcal{F}}_{S}} \right) $ is the class of all $ {{\mathcal{F}}_{S}} $-measurable random variables with finite second moments. For any integer $ s\ge 1 $, define the mixing coefficient

    $ \begin{equation} {{\rho }^{*}}\left( s \right) = \sup \left\{ \rho \left( {{\mathcal{F}}_{S}}, {{\mathcal{F}}_{T}} \right):S, T\subset \text{N}, \text{dist}\left( S, T \right)\ge s \right\}, \end{equation} $ (1.1)

    where

    $ \begin{equation} \rho \left( {{\mathcal{F}}_{S}}, {{\mathcal{F}}_{T}} \right) = \sup \left\{ \frac{\left| EXY-EXEY \right|}{\sqrt{\operatorname{Var}X}\cdot \sqrt{\operatorname{Var}Y}}:X\in {{L}_{2}}\left( {{\mathcal{F}}_{S}} \right), Y\in {{L}_{2}}\left( {{\mathcal{F}}_{T}} \right) \right\}. \end{equation} $ (1.2)

    Here $ \text{dist}\left(S, T \right)\ge s $ means that $ \text{dist}\left(S, T \right) = \inf \left\{ \left| i-j \right|:i\in S, j\in T \right\}\ge s $. Obviously, $ 0\le {{\rho }^{*}}\left(s+1 \right)\le {{\rho }^{*}}\left(s \right)\le 1 $ and $ {{\rho }^{*}}\left(0 \right) = 1 $. The sequence $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ is called $ {{\rho }^{*}} $-mixing if there exists $ s\in \text{N} $ such that $ {{\rho }^{*}}\left(s \right) < 1 $. Clearly, if $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ is a sequence of independent random variables, then $ {{\rho }^{*}}\left(s \right) = 0 $ for all $ s\ge 1 $; a standard non-independent example is given below.
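    A simple non-independent example, recorded here for illustration (it is standard and not taken from this paper): let $ \left\{ {{\varepsilon }_{i}}; i\ge 1 \right\} $ be i.i.d. random variables, fix $ m\ge 1 $ and a measurable function $ f $, and set

    $ \begin{equation*} {{X}_{i}} = f\left( {{\varepsilon }_{i}}, \ldots, {{\varepsilon }_{i+m}} \right), \quad i\ge 1. \end{equation*} $

    If $ \text{dist}\left(S, T \right)\ge s > m $, then $ {{\mathcal{F}}_{S}} $ and $ {{\mathcal{F}}_{T}} $ are generated by disjoint collections of the innovations $ {{\varepsilon }_{i}} $ and are therefore independent, so $ {{\rho }^{*}}\left(s \right) = 0 $ for all $ s > m $; such $ m $-dependent sequences are thus $ {{\rho }^{*}} $-mixing.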

    $ {{\rho }^{*}} $-mixing appears similar to another dependence structure, $ \rho $-mixing, but the two are quite different. $ {{\rho }^{*}} $-mixing covers a wide class of dependence structures and was first introduced into the limit theorems by Bradley [4]. Since then, many scholars have investigated the limit theory for $ {{\rho }^{*}} $-mixing random variables, and a number of important applications of $ {{\rho }^{*}} $-mixing have been established. For more details, we refer to [12,16,18,19,21,23,24], among others.

    The concept of complete convergence was first introduced by Hsu and Robbins [9] as follows: A sequence of random variables $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ converges completely to a constant $ \lambda $ if $ \sum\limits_{n = 1}^{\infty }{P\left(\left| {{X}_{n}}-\lambda \right| > \varepsilon \right)} < \infty $ for all $ \varepsilon > 0 $. By the Borel-Cantelli lemma, this implies that $ {{X}_{n}}\to \lambda $ almost surely (a.s.). Thus, complete convergence plays a crucial role in investigating the limit theory for sums of random variables as well as for weighted sums.
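    For clarity, the Borel-Cantelli step reads as follows: for every $ \varepsilon > 0 $,

    $ \begin{equation*} \sum\limits_{n = 1}^{\infty }{P\left( \left| {{X}_{n}}-\lambda \right| > \varepsilon \right)} < \infty \ \Longrightarrow \ P\left( \left| {{X}_{n}}-\lambda \right| > \varepsilon \ \text{infinitely often} \right) = 0, \end{equation*} $

    and letting $ \varepsilon \downarrow 0 $ along a countable sequence yields $ {{X}_{n}}\to \lambda $ a.s.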

    Chow [8] introduced the following notion of complete moment convergence: Let $ \left\{ {{Z}_{n}}; n\ge 1 \right\} $ be a sequence of random variables, and let $ {{a}_{n}} > 0 $, $ {{b}_{n}} > 0 $, $ q > 0 $. If $ \sum\limits_{n = 1}^{\infty }{{{a}_{n}}E\left(b_{n}^{-1}\left| {{Z}_{n}} \right|-\varepsilon \right)_{+}^{q}} < \infty $ for all $ \varepsilon \ge 0 $, then the sequence $ \left\{ {{Z}_{n}}; n\ge 1 \right\} $ is said to satisfy complete $ q $-th moment convergence. Complete moment convergence is a more general version of complete convergence, and it is also strictly stronger than the latter (see Remark 2.1).
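    To see the connection between the two notions, recall the tail-integral identity for a nonnegative random variable, which is used repeatedly below (e.g., in (2.9) and (2.22)):

    $ \begin{equation*} E\left( b_{n}^{-1}\left| {{Z}_{n}} \right|-\varepsilon \right)_{+}^{q} = \int_{0}^{\infty }{P\left( \left( b_{n}^{-1}\left| {{Z}_{n}} \right|-\varepsilon \right)_{+}^{q} > t \right)dt} = \int_{0}^{\infty }{P\left( b_{n}^{-1}\left| {{Z}_{n}} \right|-\varepsilon > {{t}^{1/q}} \right)dt}. \end{equation*} $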

    According to the related statements of Rosalsky and Thành [14] as well as those of Thành [17], we recall the definition of stochastic domination as follows.

    Definition 1.2. A sequence of random variables $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ is said to be stochastically dominated by a random variable $ X $ if for all $ x\ge 0 $,

    $ \begin{equation*} {\mathop {\sup }\limits_{n \ge 1} }\, P\left( \left| {{X}_{n}} \right|\ge x \right)\le P\left( \left| X \right|\ge x \right). \end{equation*} $

    The concept of stochastic domination is a slight generalization of identical distribution. It is easily seen that stochastic domination of $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ by the random variable $ X $ implies $ E{{\left| {{X}_{n}} \right|}^{p}}\le E{{\left| X \right|}^{p}} $ for all $ n\ge 1 $ whenever the $ p $-th moment of $ \left| X \right| $ exists, i.e., $ E{{\left| X \right|}^{p}} < \infty $.
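    For completeness, this moment bound follows directly from the tail-integral formula:

    $ \begin{equation*} E{{\left| {{X}_{n}} \right|}^{p}} = \int_{0}^{\infty }{P\left( {{\left| {{X}_{n}} \right|}^{p}} > x \right)dx}\le \int_{0}^{\infty }{P\left( {{\left| X \right|}^{p}} > x \right)dx} = E{{\left| X \right|}^{p}}. \end{equation*} $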

    As is well known, weighted sums of random variables are widely used in important linear statistics (such as least squares estimators, nonparametric regression function estimators and jackknife estimates). For this reason, many researchers in probability and statistics have devoted themselves to investigating the limiting behavior of weighted sums of random variables; see, for example, Bai and Cheng [3], Cai [5], Chen and Sung [6], Cheng et al. [7], Lang et al. [11], Peng et al. [13], Sung [15,16] and Wu [20], among others.

    Recently, Li et al. [12] extended the corresponding result of Chen and Sung [6] from negatively associated random variables to $ {{\rho }^{*}} $-mixing cases by a totally different method, and obtained the following theorem.

    Theorem A. Let $ \left\{ X, {{X}_{n}}; n\ge 1 \right\} $ be a sequence of identically distributed $ {{\rho }^{*}} $-mixing random variables with $ E{{X}_{n}} = 0 $, and let $ \left\{ {{a}_{ni}}; 1\le i\le n, n\ge 1 \right\} $ be an array of real constants such that $ \sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} = O\left(n \right) $ for some $ 1 < \alpha \le 2 $. Set $ {{b}_{n}} = {{n}^{1/\alpha }}{{\left(\log n \right)}^{1/\gamma }} $ for $ 0 < \gamma < \alpha $. If $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $, then

    $ \begin{equation} \sum\limits_{n = 1}^{\infty }{\frac{1}{n}}P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > \varepsilon {{b}_{n}} \right) < \infty \quad \text{for all } \varepsilon > 0. \end{equation} $ (1.3)

    In addition, Huang et al.[10] proved the following complete $ \alpha $-th moment convergence theorem for weighted sums of $ {{\rho }^{*}} $-mixing random variables under some moment conditions.

    Theorem B. Let $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ be a sequence of $ {{\rho }^{*}} $-mixing random variables which is stochastically dominated by a random variable $ X $, and let $ \left\{ {{a}_{ni}}; 1\le i\le n, n\ge 1 \right\} $ be an array of real constants such that $ \sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} = O\left(n \right) $ for some $ 0 < \alpha \le 2 $. Set $ {{b}_{n}} = {{n}^{1/\alpha }}{{\left(\log n \right)}^{1/\gamma }} $ for some $ \gamma > 0 $. Assume further that $ E{{X}_{n}} = 0 $ when $ 1 < \alpha \le 2 $. If

    $ \begin{equation} \begin{array}{ll} E{{|X|}^{\alpha }} < \infty, &\;{\rm{for}}\;\quad\alpha > \gamma, \\ E|X|^{\alpha}\log (1+|X|) < \infty, &\;{\rm{for}}\;\quad \alpha = \gamma, \\ E|X|^{\gamma} < \infty, &\;{\rm{for}}\;\quad \alpha < \gamma, \\ \end{array} \end{equation} $ (1.4)

    then

    $ \begin{equation} \sum\limits_{n = 1}^{\infty }{\frac{1}{n}}E\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|-\varepsilon \right)_{+}^{\alpha } < \infty \quad \text{for all } \varepsilon > 0. \end{equation} $ (1.5)

    It is interesting to find the optimal moment conditions for (1.5). Huang et al. [10] also posed a problem worth pondering: does the result (1.5) hold for the case $ \alpha > \gamma $ under the almost optimal moment condition $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $?

    Mainly inspired by the related results of Li et al. [12], Chen and Sung [6] and Huang et al. [10], the authors further study the convergence rate for weighted sums of $ {{\rho }^{*}} $-mixing random variables without the assumption of identical distribution. Under the almost optimal moment condition $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $ for $ 0 < \gamma < \alpha $ with $ 1 < \alpha \le 2 $, a version of the complete $ \alpha $-th moment convergence theorem for weighted sums of $ {{\rho }^{*}} $-mixing random variables is established. The main result not only improves the corresponding ones of Li et al. [12] and Chen and Sung [6], but also partially settles the open problem posed by Huang et al. [10].

    Now, we state the main result as follows. Some important auxiliary lemmas and the proof of the theorem will be detailed in the next section.

    Theorem 1.1. Let $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ be a sequence of $ {{\rho }^{*}} $-mixing random variables with $ E{{X}_{n}} = 0 $ which is stochastically dominated by a random variable $ X $, and let $ \left\{ {{a}_{ni}}; 1\le i\le n, n\ge 1 \right\} $ be an array of real constants such that $ \sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} = O\left(n \right) $ for some $ 0 < \alpha \le 2 $. Set $ {{b}_{n}} = {{n}^{1/\alpha }}{{\left(\log n \right)}^{1/\gamma }} $ for $ \gamma > 0 $. If $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $ for $ \alpha > \gamma $ with $ 1 < \alpha \le 2 $, then (1.5) holds.
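    As a purely illustrative aside (not part of the original paper), the following minimal Monte Carlo sketch shows the quantity controlled by (1.5) in the simplest admissible setting we could choose: i.i.d. standard normal variables (independence gives $ {{\rho }^{*}}\left(s \right) = 0 $), weights $ {{a}_{ni}} = 1 $, and the assumed parameter choice $ \alpha = 2 $, $ \gamma = 1 $, so that $ {{b}_{n}} = {{n}^{1/2}}\log n $. All names and parameter values in the snippet are ours, chosen for illustration only.

```python
import numpy as np

# Monte Carlo estimate of the summand E(max_j |S_j| / b_n - eps)_+^alpha
# appearing in (1.5), for i.i.d. N(0,1) variables and a_ni = 1.
rng = np.random.default_rng(0)
alpha, gamma, eps = 2.0, 1.0, 0.5   # illustrative choices with gamma < alpha
reps = 200                          # Monte Carlo replications per n

for n in [2 ** k for k in range(4, 12)]:
    b_n = n ** (1.0 / alpha) * np.log(n) ** (1.0 / gamma)
    X = rng.standard_normal((reps, n))
    # max_j |S_j| over the partial sums S_j = X_1 + ... + X_j, j = 1..n
    max_partial = np.max(np.abs(np.cumsum(X, axis=1)), axis=1)
    summand = np.mean(np.maximum(max_partial / b_n - eps, 0.0) ** alpha)
    print(f"n = {n:5d}   E(max|S_j|/b_n - eps)_+^alpha ~ {summand:.3e}")
```

    Running this prints summands that decay rapidly in $ n $, consistent with the convergence of the weighted series in (1.5); of course, such a simulation is only a sanity check, not a proof.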

    Throughout this paper, let $ I\left(A \right) $ be the indicator function of the event $ A $ and $ I(A, B) = I(A\bigcap B) $. The symbol $ C $ always denotes a positive constant, which may differ in various places, and $ {{a}_{n}} = O\left({{b}_{n}} \right) $ stands for $ {{a}_{n}}\le C{{b}_{n}} $.

    To prove our main result of this paper, we need the following important lemmas.

    Lemma 2.1. (Utev and Peligrad [18]) Let $ p\ge 2 $, and let $ \left\{ {{X}_{n}}; n\ge 1 \right\} $ be a sequence of $ {{\rho }^{*}} $-mixing random variables with $ E{{X}_{n}} = 0 $ and $ E{{\left| {{X}_{n}} \right|}^{p}} < \infty $ for all $ n\ge 1 $. Then there exists a positive constant $ C $ depending only on $ p $, $ s $ and $ {{\rho }^{*}}\left(s \right) $ such that

    $ \begin{equation} E\left( {\mathop {\max }\limits_{1 \le j \le n} }\, {{\left| \sum\limits_{i = 1}^{j}{{{X}_{i}}} \right|}^{p}} \right)\le C\left( \sum\limits_{i = 1}^{n}{E{{\left| {{X}_{i}} \right|}^{p}}}+{{\left( \sum\limits_{i = 1}^{n}{EX_{i}^{2}} \right)}^{p/2}} \right). \end{equation} $ (2.1)

    In particular, if $ p = 2 $,

    $ \begin{equation} E\left( {\mathop {\max }\limits_{1 \le j \le n} }\, {{\left| \sum\limits_{i = 1}^{j}{{{X}_{i}}} \right|}^{2}} \right)\le C\sum\limits_{i = 1}^{n}{EX_{i}^{2}}. \end{equation} $ (2.2)

    The following lemma is a basic property of stochastic domination. For details, we refer to Adler and Rosalsky [1], Adler et al. [2], or Wu [22]. In fact, the multiplicative constant $ C $ appearing in those references can be removed, since it was proved in [[14], Theorem 2.4] (or [[17], Corollary 2.3]) that the condition with a general $ C $ is equivalent to the one with $ C = 1 $.

    Lemma 2.2. Let $ \left\{ {{X}_{n}}, n\ge 1 \right\} $ be a sequence of random variables which is stochastically dominated by a random variable $ X $. For all $ \beta > 0 $ and $ b > 0 $, the following statements hold:

    $ \begin{equation} E{{\left| {{X}_{n}} \right|}^{\beta }}I\left( \left| {{X}_{n}} \right|\le b \right)\le \left( E{{\left| X \right|}^{\beta }}I\left( \left| X \right|\le b \right)+{{b}^{\beta }}P\left( \left| X \right| > b \right) \right), \end{equation} $ (2.3)
    $ \begin{equation} E{{\left| {{X}_{n}} \right|}^{\beta }}I\left( \left| {{X}_{n}} \right| > b \right)\le E{{\left| X \right|}^{\beta }}I\left( \left| X \right| > b \right). \end{equation} $ (2.4)

    Consequently, $ E{{\left| {{X}_{n}} \right|}^{\beta }}\le E{{\left| X \right|}^{\beta }} $.

    Lemma 2.3. Under the conditions of Theorem 1.1, if $ E{{{\left| X \right|}^{\alpha }}}/{{{\left(\log \left(1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty $ for $ 0 < \gamma < \alpha $ with $ 0 < \alpha \le 2 $, then

    $ \begin{equation} \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}} < \infty. \end{equation} $ (2.5)

    Proof. By Definition 1.2, we have

    $ \begin{eqnarray} \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}}&\le& \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}} \\ &\le& \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{0}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \frac{{{\left| {{a}_{ni}}X \right|}^{\alpha }}}{b_{n}^{\alpha }} > t \right)}dt}} \\ &\le& \sum\limits_{n = 1}^{\infty }{{{n}^{-1}}b_{n}^{-\alpha }\sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{\alpha }}}}. \end{eqnarray} $ (2.6)

    It is easy to show that

    $ \begin{eqnarray} \sum\limits_{n = 1}^{\infty }{{{n}^{-1}}b_{n}^{-\alpha }\sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}E{{\left| X \right|}^{\alpha }}I\left( \left| X \right|\le {{b}_{n}} \right)}}&\le& C\sum\limits_{n = 1}^{\infty }{b_{n}^{-\alpha }E{{\left| X \right|}^{\alpha }}I\left( \left| X \right|\le {{b}_{n}} \right)} \\ &\le& C\sum\limits_{n = 1}^{\infty }{b_{n}^{-\alpha }\sum\limits_{k = 1}^{n}{E{{\left| X \right|}^{\alpha }}I\left( {{b}_{k}} < \left| X \right|\le {{b}_{k+1}} \right)}} \\ &\le& C\sum\limits_{k = 1}^{\infty }{E{{\left| X \right|}^{\alpha }}I\left( {{b}_{k}} < \left| X \right|\le {{b}_{k+1}} \right){{\left( \log k \right)}^{1-\left( \alpha /\gamma \right)}}} \\ &\le& CE{{{\left| X \right|}^{\alpha }}}/{{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\left( \alpha /\gamma \right)-1}}}\; < \infty, \end{eqnarray} $ (2.7)

    and

    $ \begin{eqnarray} \sum\limits_{n = 1}^{\infty }{{{n}^{-1}}b_{n}^{-\alpha }\sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}E{{\left| X \right|}^{\alpha }}I\left( \left| X \right| > {{b}_{n}} \right)}}&\le& C\sum\limits_{n = 1}^{\infty }{b_{n}^{-\alpha }E{{\left| X \right|}^{\alpha }}I\left( \left| X \right| > {{b}_{n}} \right)} \\ & = &C\sum\limits_{n = 1}^{\infty }{b_{n}^{-\alpha }\sum\limits_{j = n}^{\infty }{E{{\left| X \right|}^{\alpha }}I\left( {{b}_{j}} < \left| X \right|\le {{b}_{j+1}} \right)}} \\ & = &C\sum\limits_{j = 1}^{\infty }{E{{\left| X \right|}^{\alpha }}I\left( {{b}_{j}} < \left| X \right|\le {{b}_{j+1}} \right)\sum\limits_{n = 1}^{j}{{{n}^{-1}}{{\left( \log n \right)}^{-\alpha /\gamma }}}} \\ &\le& C\sum\limits_{j = 1}^{\infty }{{{\left( \log j \right)}^{1-\left( \alpha /\gamma \right)}}E{{\left| X \right|}^{\alpha }}I\left( {{b}_{j}} < \left| X \right|\le {{b}_{j+1}} \right)} \\ &\le& CE{{{\left| X \right|}^{\alpha }}}/{{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\left( \alpha /\gamma \right)-1}}}\; < \infty. \end{eqnarray} $ (2.8)

    Hence, (2.5) holds by (2.6)–(2.8).
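    For clarity, the key estimate behind the third line of (2.7) is a comparison of $ \sum{b_{n}^{-\alpha }} $ with an integral; since $ \alpha /\gamma > 1 $, the integral test gives, for $ k\ge 2 $,

    $ \begin{equation*} \sum\limits_{n = k}^{\infty }{b_{n}^{-\alpha }} = \sum\limits_{n = k}^{\infty }{\frac{1}{n{{\left( \log n \right)}^{\alpha /\gamma }}}}\le C\int_{k-1}^{\infty }{\frac{dx}{x{{\left( \log x \right)}^{\alpha /\gamma }}}}\le C{{\left( \log k \right)}^{1-\left( \alpha /\gamma \right)}}, \end{equation*} $

    which is combined with the observation that $ \log \left(1+\left| X \right| \right) $ is of order $ \log k $ on the event $ \left\{ {{b}_{k}} < \left| X \right|\le {{b}_{k+1}} \right\} $.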

    Proof of Theorem 1.1. For any given $ \varepsilon > 0 $, observe that

    $ \begin{eqnarray} \sum\limits_{n = 1}^{\infty }{\frac{1}{n}}E\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|-\varepsilon \right)_{+}^{\alpha} & = & \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{0}^{\infty }{P\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|-\varepsilon > {{t}^{1/\alpha}} \right)dt}} \\ & = & \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{0}^{1}{P\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > \varepsilon +{{t}^{1/\alpha}} \right)dt}} \\ &&+ \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{P\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > \varepsilon +{{t}^{1/\alpha}} \right)dt}} \\ &\le& \sum\limits_{n = 1}^{\infty }{\frac{1}{n}P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > \varepsilon {{b}_{n}} \right)} \\ &&+ \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha}} \right)dt}} \\ &\triangleq& I+J. \end{eqnarray} $ (2.9)

    By Theorem A of Li et al. [12], stated in the first section, we directly get $ I < \infty $. Hence, in order to prove (1.5), it suffices to show that $ J < \infty $.

    Without loss of generality, assume that $ {{a}_{ni}}\ge 0 $. For all $ t\ge 1 $ and $ 1\le i\le n $, $ n\in \text{N} $, define

    $ \begin{equation*} {{Y}_{i}} = {{a}_{ni}}{{X}_{i}}I\left( \left| {{a}_{ni}}{{X}_{i}} \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right). \end{equation*} $

    It is easy to check that

    $ \begin{equation*} \left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)\subset \left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{Y}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)\bigcup \left( \bigcup\limits_{i = 1}^{n}{\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)} \right), \end{equation*} $

    which implies

    $ \begin{eqnarray} P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)&\le& P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{Y}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right) \\ && +P\left( \bigcup\limits_{i = 1}^{n}{\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)} \right). \end{eqnarray} $ (2.10)

    To prove $ J < \infty $, we need only show that

    $ \begin{equation*} {{J}_{1}} = \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{Y}_{i}}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)dt}} < \infty, \end{equation*} $
    $ \begin{equation*} {{J}_{2}} = \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{P\left( \bigcup\limits_{i = 1}^{n}{\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)dt}} < \infty. \end{equation*} $

    Since

    $ P\left( \bigcup\limits_{i = 1}^{n}{\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)\le \sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}, $

    it follows from Lemma 2.3 that

    $ \begin{equation*} {{J}_{2}}\le \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}} < \infty. \end{equation*} $

    Next, we prove that, as $ n\to \infty $,

    $ \begin{equation} {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{E{{Y}_{i}}} \right|\to 0. \end{equation} $ (2.11)

    By $ E{{X}_{n}} = 0 $ and (2.4) of Lemma 2.2, it follows that

    $ \begin{array}{l} {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{E{{Y}_{i}}} \right| = {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{E{{a}_{ni}}{{X}_{i}}I\left( \left| {{a}_{ni}}{{X}_{i}} \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right)} \right|\\ = {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{E{{a}_{ni}}{{X}_{i}}I\left( \left| {{a}_{ni}}{{X}_{i}} \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)} \right|\\ \le C{\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}\sum\limits_{i = 1}^{n}{E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}. \end{array} $

    Observe that

    $ \begin{eqnarray} E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)& = &E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right|\le {{b}_{n}} \right) \\ &&+E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right| > {{b}_{n}} \right). \end{eqnarray} $ (2.12)

    For $ 0 < \gamma < \alpha $ and $ 1 < \alpha \le 2 $, we clearly have

    $ \begin{align} & E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right|\le {{b}_{n}} \right) \le {{C}}b_{n}^{1-\alpha }{{t}^{\left( 1/\alpha \right)-1}}{{\left| {{a}_{ni}} \right|}^{\alpha }}E{{\left| X \right|}^{\alpha }}I\left( \left| X \right|\le {{b}_{n}} \right) \\ & \le {{C}}b_{n}^{1-\alpha }{{t}^{\left( 1/\alpha \right)-1}}{{\left| {{a}_{ni}} \right|}^{\alpha }}E\left( \frac{{{\left| X \right|}^{\alpha }}}{{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}} \right)I\left( \left| X \right|\le {{b}_{n}} \right) \\ & \le {{C}}{{t}^{\left( 1/\alpha \right)-1}}{{n}^{-1+\left( 1/\alpha \right)}}{{\left| {{a}_{ni}} \right|}^{\alpha }}{{(\log n)}^{\left( 1/\gamma \right)-1}}, \end{align} $ (2.13)

    and

    $ \begin{eqnarray} E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right| > {{b}_{n}} \right)&\le& {{C}}\left| {{a}_{ni}} \right|E\left| X \right|I\left( \left| X \right| > {{b}_{n}} \right) \\ &\le& {{C}}b_{n}^{1-\alpha }{{\left( \log \left( 1+{{b}_{n}} \right) \right)}^{\left( \alpha /\gamma \right)-1}}\left| {{a}_{ni}} \right| \\ &\le& {{C}}{{n}^{-1+\left( 1/\alpha \right)}}{{(\log n)}^{-1+\left( 1/\gamma \right)}}\left| {{a}_{ni}} \right|. \end{eqnarray} $ (2.14)

    Thus,

    $ \begin{eqnarray} {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}\sum\limits_{i = 1}^{n}{E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right|\le {{b}_{n}} \right)}&\le& Cb_{n}^{-1}{{n}^{-1+\left( 1/\alpha \right)}}{{(\log n)}^{\left( 1/\gamma \right)-1}}\sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} \\ &\le& C{{(\log n)}^{-1}}\to 0, \end{eqnarray} $ (2.15)

    and

    $ \begin{eqnarray} {\mathop {\sup }\limits_{t \ge 1} }\, \frac{1}{{{b}_{n}}{{t}^{1/\alpha }}}\sum\limits_{i = 1}^{n}{E\left| {{a}_{ni}}X \right|I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }}, \left| X \right| > {{b}_{n}} \right)}&\le& Cb_{n}^{-1}{{n}^{-1+\left( 1/\alpha \right)}}{{(\log n)}^{-1+\left( 1/\gamma \right)}}\sum\limits_{i = 1}^{n}{\left| {{a}_{ni}} \right|} \\ &\le& C{{(\log n)}^{-1}}\to 0. \end{eqnarray} $ (2.16)
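    In (2.16), the bound $ \sum\limits_{i = 1}^{n}{\left| {{a}_{ni}} \right|}\le Cn $ is used implicitly; it follows from Hölder's inequality together with the condition $ \sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} = O\left(n \right) $:

    $ \begin{equation*} \sum\limits_{i = 1}^{n}{\left| {{a}_{ni}} \right|}\le {{\left( \sum\limits_{i = 1}^{n}{{{\left| {{a}_{ni}} \right|}^{\alpha }}} \right)}^{1/\alpha }}{{n}^{1-\left( 1/\alpha \right)}}\le C{{n}^{1/\alpha }}\cdot {{n}^{1-\left( 1/\alpha \right)}} = Cn. \end{equation*} $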

    Then, (2.11) follows from (2.12)–(2.16).

    Hence, for $ n $ sufficiently large, we have that $ {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{E{{Y}_{i}}} \right|\le \frac{{{b}_{n}}{{t}^{1/\alpha }}}{2} $ holds uniformly for all $ t\ge 1 $. Therefore,

    $ \begin{equation} {{J}_{1}}\le \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{\left( {{Y}_{i}}-E{{Y}_{i}} \right)} \right| > \frac{{{b}_{n}}{{t}^{1/\alpha }}}{2} \right)dt}}. \end{equation} $ (2.17)

    By Markov's inequality, (2.2) of Lemma 2.1 and (2.3) of Lemma 2.2, we get that

    $ \begin{eqnarray} {{J}_{1}}&\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}E\left( {\mathop {\max }\limits_{1 \le j \le n} }\, {{\left| \sum\limits_{i = 1}^{j}{\left( {{Y}_{i}}-E{{Y}_{i}} \right)} \right|}^{2}} \right)dt}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{Y}_{i}}-E{{Y}_{i}} \right|}^{2}}} \right)dt}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}{{X}_{i}} \right|}^{2}}I\left( \left| {{a}_{ni}}{{X}_{i}} \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)dt}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( \left| {{a}_{ni}}X \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)dt}} \\ &&+C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( \left| {{a}_{ni}}X \right|\le {{b}_{n}} \right)} \right)dt}} \\ &&+C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}} < \left| {{a}_{ni}}X \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)dt}} \\ &&+C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}} \\ & = &{{J}_{11}}+{{J}_{12}}+{{J}_{13}}. \end{eqnarray} $ (2.18)

    Based on formula (2.2) of Lemma 2.2 in Li et al. [12], we get that

    $ \begin{eqnarray} {{J}_{11}}& = &\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( \left| {{a}_{ni}}X \right|\le {{b}_{n}} \right)} \right)dt}} \\ &\le& \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\frac{1}{b_{n}^{\alpha }}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{\alpha }}I\left( \left| {{a}_{ni}}X \right|\le {{b}_{n}} \right)} \right)} < \infty. \end{eqnarray} $ (2.19)

    Substituting $ t = {{x}^{\alpha }} $, and using (2.3) of Lemma 2.2, Markov's inequality and Lemma 2.3, we also get that

    $ \begin{eqnarray} {{J}_{12}}& = &\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\frac{1}{b_{n}^{2}{{t}^{2/\alpha }}}\left( \sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}} < \left| {{a}_{ni}}X \right|\le {{b}_{n}}{{t}^{1/\alpha }} \right)} \right)dt}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\int_{1}^{\infty }{{{x}^{\alpha -3}}\sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}} < \left| {{a}_{ni}}X \right|\le {{b}_{n}}x \right)}dx}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\sum\limits_{m = 1}^{\infty }{\int_{m}^{m+1}{{{x}^{\alpha -3}}\sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}} < \left| {{a}_{ni}}X \right|\le {{b}_{n}}x \right)}dx}}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\sum\limits_{m = 1}^{\infty }{{{m}^{\alpha -3}}\sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}} < \left| {{a}_{ni}}X \right|\le {{b}_{n}}\left( m+1 \right) \right)}}} \\ & = &C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\sum\limits_{i = 1}^{n}{\sum\limits_{m = 1}^{\infty }{\sum\limits_{s = 1}^{m}{{{m}^{\alpha -3}}E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}}s < \left| {{a}_{ni}}X \right|\le {{b}_{n}}\left( s+1 \right) \right)}}}} \\ & = &C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\sum\limits_{i = 1}^{n}{\sum\limits_{s = 1}^{\infty }{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}}s < \left| {{a}_{ni}}X \right|\le {{b}_{n}}\left( s+1 \right) \right)\sum\limits_{m = s}^{\infty }{{{m}^{\alpha -3}}}}}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{2}}\sum\limits_{i = 1}^{n}{\sum\limits_{s = 1}^{\infty }{E{{\left| {{a}_{ni}}X \right|}^{2}}I\left( {{b}_{n}}s < \left| {{a}_{ni}}X \right|\le {{b}_{n}}\left( s+1 \right) \right){{s}^{\alpha -2}}}}} \\ &\le& C\sum\limits_{n = 1}^{\infty }{\frac{1}{nb_{n}^{\alpha }}\sum\limits_{i = 1}^{n}{E{{\left| {{a}_{ni}}X \right|}^{\alpha }}I\left( \left| {{a}_{ni}}X \right| > {{b}_{n}} \right)}} \\ &\le& CE{{{\left| X \right|}^{\alpha }}}/{{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty. \end{eqnarray} $ (2.20)
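    For clarity, the substitution $ t = {{x}^{\alpha }} $ in the second line of (2.20) gives $ dt = \alpha {{x}^{\alpha -1}}dx $, so that

    $ \begin{equation*} \int_{1}^{\infty }{\frac{1}{{{t}^{2/\alpha }}}\left( \cdot \right)dt} = \alpha \int_{1}^{\infty }{{{x}^{\alpha -3}}\left( \cdot \right)dx}, \end{equation*} $

    which explains the factor $ {{x}^{\alpha -3}} $ there.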

    Analogously to the proof of Lemma 2.3, it is easy to show that

    $ \begin{equation} {{J}_{13}} = \sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{1}^{\infty }{\sum\limits_{i = 1}^{n}{P\left( \left| {{a}_{ni}}X \right| > {{b}_{n}}{{t}^{1/\alpha }} \right)}dt}}\le CE{{{\left| X \right|}^{\alpha }}}/{{{\left( \log \left( 1+\left| X \right| \right) \right)}^{\alpha /\gamma -1}}}\; < \infty. \end{equation} $ (2.21)

    Hence, the desired result $ {{J}_{1}} < \infty $ holds by the above statements. The proof of Theorem 1.1 is completed.

    Remark 2.1. Under the conditions of Theorem 1.1, note that

    $ \begin{eqnarray} \infty & > & \sum\limits_{n = 1}^{\infty }{\frac{1}{n}}E\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|-\varepsilon \right)_{+}^{\alpha} \\ & = & \sum\limits_{n = 1}^{\infty }{\frac{1}{n}}\int_{0}^{\infty }{P\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|-\varepsilon > {{t}^{1/\alpha}} \right)d}t \\ &\ge& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}\int_{0}^{{{\varepsilon }^{\alpha }}}{P\left( \frac{1}{{{b}_{n}}}{\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > \varepsilon +{{t}^{1/\alpha }} \right)}dt} \\ &\ge& C\sum\limits_{n = 1}^{\infty }{\frac{1}{n}}P\left( {\mathop {\max }\limits_{1 \le j \le n} }\, \left| \sum\limits_{i = 1}^{j}{{{a}_{ni}}{{X}_{i}}} \right| > 2\varepsilon{{b}_{n}} \right)\quad \text{for all } \varepsilon > 0. \end{eqnarray} $ (2.22)

    Here the final inequality in (2.22) uses that $ {{t}^{1/\alpha }}\le \varepsilon $, and hence $ \varepsilon +{{t}^{1/\alpha }}\le 2\varepsilon $, for $ t\in \left[ 0, {{\varepsilon }^{\alpha }} \right] $. Since $ \varepsilon > 0 $ is arbitrary, it follows from (2.22) that complete moment convergence is strictly stronger than complete convergence. Compared with the corresponding results of Li et al. [12] and Chen and Sung [6], it is worth pointing out that Theorem 1.1 of this paper extends and improves those results under the same moment condition. In addition, the main result partially settles the open problem posed by Huang et al. [10] for the case $ 0 < \gamma < \alpha $ with $ 1 < \alpha \le 2 $.

    In this work, we consider the problem of complete moment convergence for weighted sums of weakly dependent (i.e., $ {{\rho }^{*}} $-mixing) random variables. The main results of this paper are presented in the form of the main theorem and a remark, together with Lemma 2.3, which plays a vital role in proving the main theorem. The main theorem improves and generalizes the corresponding complete convergence results of Li et al. [12] and Chen and Sung [6].

    The authors are most grateful to the Editor and the anonymous referees for carefully reading the manuscript and for offering valuable suggestions and comments, which enabled them to greatly improve this paper. This work was supported by the Doctor and Professor Natural Science Foundation of Guilin University of Aerospace Technology.

    All authors declare no conflicts of interest in this paper.



    [1] A. Adler, A. Rosalsky, Some general strong laws for weighted sums of stochastically dominated random variables, Stoch. Anal. Appl., 5 (1987), 1–16. http://doi.org/10.1080/07362998708809104 doi: 10.1080/07362998708809104
    [2] A. Adler, A. Rosalsky, R. L. Taylor, Strong laws of large numbers for weighted sums of random elements in normed linear spaces, Int. J. Math. Math. Sci., 12 (1989), 507–530. http://doi.org/10.1155/s0161171289000657 doi: 10.1155/s0161171289000657
    [3] Z. D. Bai, P. E. Cheng, Marcinkiewicz strong laws for linear statistics, Stat. Probabil. Lett., 46 (2000), 105–112. http://doi.org/10.1016/S0167-7152(99)00093-0 doi: 10.1016/S0167-7152(99)00093-0
    [4] R. C. Bradley, On the spectral density and asymptotic normality of weakly dependent random fields, J. Theor. Probab., 5 (1992), 355–373. http://doi.org/10.1007/BF01046741 doi: 10.1007/BF01046741
    [5] G. H. Cai, Strong laws for weighted sums of NA random variables, Metrika, 68 (2008), 323–331. http://doi.org/10.1007/s00184-007-0160-5 doi: 10.1007/s00184-007-0160-5
    [6] P. Y. Chen, S. H. Sung, On the strong convergence for weighted sums of negatively associated random variables, Stat. Probabil. Lett., 92 (2014), 45–52. http://doi.org/10.1016/j.spl.2014.04.028 doi: 10.1016/j.spl.2014.04.028
    [7] N. Cheng, C. Lu, J. B. Qi, X. J. Wang, Complete moment convergence for randomly weighted sums of extended negatively dependent random variables with application to semiparametric regression models, Stat. Pap., 63 (2022), 397–419. http://doi.org/10.1007/s00362-021-01244-1 doi: 10.1007/s00362-021-01244-1
    [8] Y. S. Chow, On the rate of moment complete convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sinica, 16 (1988), 177–201.
    [9] P. L. Hsu, H. Robbins, Complete convergence and the law of large numbers, Proc. Nat. Acad. Sci., 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25 doi: 10.1073/pnas.33.2.25
    [10] H. W. Huang, H. Zou, Y. H. Feng, F. X. Feng, A note on the strong convergence for weighted sums of ${{\rho }^{*}}$-mixing random variables, J. Math. Inequal., 12 (2018), 507–516. https://doi.org/10.7153/jmi-2018-12-37 doi: 10.7153/jmi-2018-12-37
    [11] J. J. Lang, L. Cheng, Z. Q. Yu, Y. Wu, X. J. Wang, Complete $f$-moment convergence for randomly weighted sums of extended negatively dependent random variables and its statistical application, Theor. Probab. Appl., 67 (2022), 327–350. https://doi.org/10.4213/tvp5399 doi: 10.4213/tvp5399
    [12] W. Li, P. Y. Chen, S. H. Sung, Remark on convergence rate for weighted sums of ${{\rho }^{*}}$-mixing random variables, RACSAM, 111 (2017), 507–513. https://doi.org/10.1007/s13398-016-0314-2 doi: 10.1007/s13398-016-0314-2
    [13] Y. J. Peng, X. Q. Zheng, W. Yu, K. X. He, X. J. Wang, Remark on convergence rate for weighted sums of ${{\rho }^{*}}$-mixing random variables, J. Syst. Sci. Complex, 35 (2022), 342–360. https://doi.org/10.1007/s11424-020-0098-5 doi: 10.1007/s11424-020-0098-5
    [14] A. Rosalsky, L. V. Thành, A note on the stochastic domination condition and uniform integrability with applications to the strong law of large numbers, Stat. Probabil. Lett., 178 (2021), 109181. https://doi.org/10.1016/j.spl.2021.109181 doi: 10.1016/j.spl.2021.109181
    [15] S. H. Sung, On the strong convergence for weighted sums of random variables, Stat. Pap., 52 (2011), 447–454. https://doi.org/10.1007/s00362-009-0241-9 doi: 10.1007/s00362-009-0241-9
    [16] S. H. Sung, On the strong convergence for weighted sums of ${{\rho }^{*}}$-mixing random variables, Stat. Pap., 54 (2013), 773–781. https://doi.org/10.1007/s00362-012-0461-2 doi: 10.1007/s00362-012-0461-2
    [17] L. V. Thành, On a new concept of stochastic domination and the laws of large numbers, Test, 2022 (2022), 1–33. https://doi.org/10.1007/s11749-022-00827-w doi: 10.1007/s11749-022-00827-w
    [18] S. Utev, M. Peligrad, Maximal inequalities and an invariance principle for a class of weakly dependent random variables, J. Theor. Probab., 16 (2003), 101–115. https://doi.org/10.1023/A:1022278404634 doi: 10.1023/A:1022278404634
    [19] X. J. Wang, X. Deng, F. X. Xia, S. H. Hu, The consistency for the estimators of semiparametric regression model based on weakly dependent errors, Stat. Pap., 58 (2017), 303–318. https://doi.org/10.1007/s00362-015-0698-7 doi: 10.1007/s00362-015-0698-7
    [20] W. B. Wu, On the strong convergence of a weighted sum, Stat. Probabil. Lett., 44 (1999), 19–22. https://doi.org/10.1016/S0167-7152(98)00287-9 doi: 10.1016/S0167-7152(98)00287-9
    [21] Y. F. Wu, S. H. Sung, A. Volodin, A note on the rates of convergence for weighted sums of ${{\rho }^{*}}$-mixing random variables, Lith. Math. J., 54 (2014), 220–228. https://doi.org/10.1007/s10986-014-9239-7 doi: 10.1007/s10986-014-9239-7
    [22] Q. Y. Wu, Probability limit theory for mixing sequences, Beijing: Science Press of China, 2006.
    [23] Q. Y. Wu, Y. Y. Jiang, Some strong limit theorems for $\tilde{\rho }$-mixing sequences of random variables, Stat. Probabil. Lett., 78 (2008), 1017–1023. https://doi.org/10.1016/j.spl.2007.09.061 doi: 10.1016/j.spl.2007.09.061
    [24] X. C. Zhou, C. C. Tan, J. G. Lin, On the strong laws for weighted sums of ${{\rho }^{*}}$-mixing random variables, J. Inequal. Appl., 2011 (2011), 157816. https://doi.org/10.1155/2011/157816 doi: 10.1155/2011/157816
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
