Research article

On moment convergence for some order statistics

  • Received: 25 March 2022 Revised: 26 May 2022 Accepted: 06 June 2022 Published: 20 July 2022
  • MSC : 62F10, 62G30

  • By exploring the uniform integrability of a sequence of some order statistics (OSs), we obtain a moment convergence conclusion for the sequence under weak conditions, even when the corresponding population of interest has no moment of any positive order. As an application, we broaden the range of applications of a theorem presented in a reference dealing with the approximation of the difference between the moments of a sequence of normalized OSs and the corresponding moments of a standard normal distribution. With the aid of the broadened theorem, we explore the infinitesimal type of the moments of the errors that arise when some population quantiles are estimated by the relative OSs. Finally, from the obtained conclusion, we easily derive a combinatorial formula that seems hard to prove by other methods.

    Citation: Jin-liang Wang, Chang-shou Deng, Jiang-feng Li. On moment convergence for some order statistics[J]. AIMS Mathematics, 2022, 7(9): 17061-17079. doi: 10.3934/math.2022938




    Order statistics (OSs) play an important role in nonparametric statistics. Under the assumption of a large sample size, relative investigations mainly focus on the asymptotic distributions of certain functions of OSs. Among these studies, an elegant one provided by Bahadur in 1966 (see [1]) is the central limit theorem for OSs. As revealed there, when the population of interest is absolutely continuous, a sequence of suitably normalized OSs usually has an asymptotic standard normal distribution. That is useful in constructing a confidence interval for a given quantile of the population. Comparatively, the study of the moment convergence of the mentioned sequence is also significant: for instance, if we utilize a sample quantile as an asymptotically unbiased estimator of the corresponding population quantile, then analyzing the second-moment convergence of the sequence is essential for approximating the mean square error of the estimate.

    However, the analysis of the moment convergence of OSs is usually very difficult; the reason, as interpreted by Thomas and Sreekumar in [2], may lie in the fact that the moments of OSs are usually very hard to obtain.

    For a random sequence, although it is well known that convergence in distribution does not necessarily guarantee the corresponding moment convergence, that obstacle can usually be overcome by the additional requirement of uniform integrability of the sequence. For instance, see [3] for a reference dealing with some extreme OSs under certain populations: in that article, Wang et al. discussed the uniform integrability of a sequence of normalized extreme OSs and derived equivalent moment expressions.

    In the following theorem, we discuss the moment convergence of some central OSs rather than extreme ones.

    Theorem 1. For a population $X$ distributed according to a continuous probability density function (pdf) $f(x)$, let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$, satisfying $f(x_p)>0$. Let $(X_1,\ldots,X_n)$ be a random sample arising from $X$ and let $X_{i:n}$ be the $i$th OS. If the cumulative distribution function (cdf) $F(x)$ of $X$ has an inverse function $G(x)$ satisfying

    $|G(x)|\le Bx^{-q}(1-x)^{-q}$ (1.1)

    for some constants $B>0$, $q\ge0$ and all $x\in(0,1)$, then for arbitrary $\delta>0$ we have

    $\lim_{n\to\infty}EX_{i:n}^{\delta}=x_p^{\delta},$

    provided $\lim_{n\to+\infty}i/n=p$, equivalently rewritten as $i/n=p+o(1)$.

    Remark 1. Now we use the symbol $\lfloor z\rfloor$ for the integer part of a positive number $z$ and $m_{n,p}$ for the sample $p$-quantile of a random sample $(X_1,\ldots,X_n)$; namely, $m_{n,p}=(X_{pn:n}+X_{pn+1:n})/2$ if $pn$ is an integer and $m_{n,p}=X_{\lfloor pn\rfloor+1:n}$ otherwise. As both limiting conclusions $\lim_{n\to\infty}EX_{\lfloor pn\rfloor:n}^{\delta}=x_p^{\delta}$ and $\lim_{n\to\infty}EX_{\lfloor pn\rfloor+1:n}^{\delta}=x_p^{\delta}$ hold under the conditions of Theorem 1, and $m_{n,p}^{\delta}$ is always squeezed between $X_{\lfloor pn\rfloor:n}^{\delta}$ and $X_{\lfloor pn\rfloor+1:n}^{\delta}$, according to the Sandwich Theorem we have $\lim_{n\to\infty}Em_{n,p}^{\delta}=x_p^{\delta}$.
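    For concreteness, the sample $p$-quantile convention of Remark 1 can be sketched in a few lines of Python (the helper name `sample_quantile` is ours, not the paper's):

```python
import math

def sample_quantile(xs, p):
    """Sample p-quantile m_{n,p} of Remark 1 (1-based order statistics)."""
    ys = sorted(xs)              # order statistics X_{1:n} <= ... <= X_{n:n}
    n = len(ys)
    k = p * n
    if k == int(k):              # pn is an integer: average X_{pn:n} and X_{pn+1:n}
        k = int(k)
        return (ys[k - 1] + ys[k]) / 2
    return ys[math.floor(k)]     # X_{floor(pn)+1:n} -> 0-based index floor(pn)
```

    For instance, `sample_quantile([1, 2, 3, 4], 0.5)` averages the two central order statistics and returns 2.5, while `sample_quantile([1, 2, 3], 0.5)` returns the middle value 2.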

    Remark 2. For a continuous function $H(x)$, $x\in(0,1)$, if

    $\lim_{x\to0^{+}}H(x)=\lim_{x\to1^{-}}H(x)=0,$

    then there is a constant $C>0$ such that the inequality $|H(x)|\le C$ holds for all $x\in(0,1)$. For that reason, condition (1.1) can be replaced by the statement that there exists some constant $V\ge0$ such that

    $\lim_{x\to0^{+}}G(x)\,x^{V}(1-x)^{V}=\lim_{x\to1^{-}}G(x)\,x^{V}(1-x)^{V}=0.$

    Remark 3. As the conclusion concerns the moment convergence of OSs, one may think that the moments of the population $X$ in Theorem 1 should exist. That is a misunderstanding, because the existence of the moments of the population is actually unnecessary. We can verify this with a population following the well-known Cauchy distribution, $X\sim f(x)=\frac{1}{\pi(1+x^{2})}$, $x\in(-\infty,+\infty)$: in this case the moment $EX$ of the population does not exist, whereas the required conditions of Theorem 1 are satisfied. Even for some populations without any moment of positive order, the conclusion of Theorem 1 still holds. For instance, if $f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$ (where the symbol $I_A(x)$ or $I_A$ stands for the indicator function of a set $A$), then we have the conclusion

    $G(x)=e^{\frac{1}{1-x}}I_{(0,1)}(x),$

    which leads to

    $\lim_{x\to0^{+}}G(x)\,x(1-x)=\lim_{x\to1^{-}}G(x)\,x(1-x)=0,$

    and therefore condition (1.1) holds; thus we can see that Theorem 1 is workable. That contradicts the statement in the final part of paper [4] claiming that, under the situation $X\sim f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$, no OS has a moment of any positive order.
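    The Cauchy case can be illustrated by a quick Monte Carlo sketch of ours (the sample size, replication count and seed below are arbitrary choices): although $EX$ does not exist, the sample median still concentrates around $x_{0.5}=0$.

```python
import math, random, statistics

random.seed(1)

def cauchy_sample(n):
    # inverse-cdf sampling: the standard Cauchy quantile is G(u) = tan(pi*(u - 1/2))
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

n, reps = 101, 2000
medians = [statistics.median(cauchy_sample(n)) for _ in range(reps)]
print(statistics.mean(medians))  # near x_{0.5} = 0, even though E X does not exist
```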

    According to Theorem 1, we know that the OS $X_{i:n}$ of interest is an asymptotically unbiased estimator of the corresponding population quantile $x_p$. Now we explore the infinitesimal type of the mean error of the estimate and derive

    Theorem 2. Let $(X_1,\ldots,X_n)$ be a random sample from $X$, which possesses a continuous pdf $f(x)$. Let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$, and let $X_{i:n}$ be the $i$th OS. If the cdf $F(x)$ of $X$ has an inverse function $G(x)$ with a continuous third derivative $G'''(x)$ on $(0,1)$ and there is a constant $U\ge0$ such that

    $\lim_{x\to0^{+}}\left(G'''(x)\,x^{U}(1-x)^{U}\right)=\lim_{x\to1^{-}}\left(G'''(x)\,x^{U}(1-x)^{U}\right)=0,$ (1.2)

    then under the assumption $i/n=p+O(n^{-1})$, which indicates the boundedness of $\frac{i/n-p}{1/n}$, the following proposition stands:

    $|E(X_{i:n}-x_p)|=O(1/n).$ (1.3)

    Remark 4. Obviously, we can see that $|E(m_{n,p}-x_p)|=O(1/n)$ under the conditions of Theorem 2.

    For i.i.d. random variables (RVs) $X_1,\ldots,X_n$ with an identical expectation $\mu$ and a common finite standard deviation $\sigma>0$, the famous Lévy–Lindeberg central limit theorem reveals that the sequence of normalized sums

    $\left\{\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma},\ n\ge1\right\}$

    converges in distribution to the standard normal distribution $N(0,1^2)$, which we denote as

    $\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma}\xrightarrow{D}N(0,1^2).$

    In 1965, von Bahr presented his work [5], showing that if it is further assumed that $E|X_1|^{k}<+\infty$ for some specific positive $k$, then the $m$-th moment convergence conclusion

    $E\left(\frac{\sum_{i=1}^{n}X_i-n\mu}{\sqrt{n}\,\sigma}\right)^{m}\to EZ^{m},\quad n\to+\infty,$ (1.4)

    holds for any positive $m$ satisfying $m\le k$. Here and throughout our paper, we denote by $Z$ a RV with the standard normal distribution $N(0,1^2)$.

    Let $f(x)$ be a continuous pdf of a population $X$ and $x_r$ the $r$-quantile of $X$ satisfying $f(x_r)>0$. Like the Lévy–Lindeberg central limit theorem, Bahadur interpreted in [1] (1966) that for the OS $X_{i:n}$ the following convergence conclusion holds:

    $\frac{f(x_r)(X_{i:n}-x_r)}{\sqrt{r(1-r)/n}}\xrightarrow{D}N(0,1^2),$

    provided $i/n\to r$ as $n\to\infty$.

    Later, in 1967, Bickel studied moment convergence on a similar topic. He obtained in [6] that for some $\varepsilon>0$, $r\in(0,1)$ and $p_n=i/n$, if the limit condition

    $\lim_{x\to\infty}x^{\varepsilon}\left[1-F(x)+F(-x)\right]=0$

    holds, then the conclusion

    $E\left(X_{i:n}-x_{\frac{i}{n+1}}\right)^{k}=\left[\frac{\sqrt{p_n(1-p_n)/n}}{f(x_{p_n})}\right]^{k}\int_{-\infty}^{\infty}\frac{x^{k}}{\sqrt{2\pi}}e^{-x^{2}/2}\,dx+o(n^{-k/2})$

    is workable for each positive integer $k$, provided $rn\le i\le(1-r)n$ as $n\to+\infty$.

    In addition to the mentioned references dealing with moment convergence of OSs, we find some more desirable conclusions on a similar topic provided by Reiss in reference [7] in 1989, from which we excerpt the one of interest as follows.

    Theorem 3. Let $f(x)$ and $F(x)$ be respectively the pdf and cdf of a population $X$. Let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$. Assume that on a neighborhood of $x_p$ the cdf $F(x)$ has $m+1$ bounded derivatives. If a positive integer $i$ satisfies $i/n=p+O(n^{-1})$, $E|X_{s:j}|<\infty$ holds for some positive integer $j$ and $s\in\{1,\ldots,j\}$, and a measurable function $h(x)$ meets the requirement $|h(x)|\le|x|^{k}$ for some positive integer $k$, then

    $Eh\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)=\int h(x)\,d\left(\Phi(x)+\varphi(x)\sum_{i=1}^{m-1}n^{-i/2}S_{i,n}(x)\right)+O(n^{-m/2}).$ (1.5)

    Here the functions $\varphi(x)$ and $\Phi(x)$ are respectively the pdf and cdf of the standard normal distribution, while $S_{i,n}(x)$ is a polynomial in $x$ of degree not more than $3i-1$ with coefficients uniformly bounded over $n$; in particular,

    $S_{1,n}(x)=\left[\frac{2q-1}{3\sqrt{p(1-p)}}+\frac{\sqrt{p(1-p)}\,f'(x_p)}{2(f(x_p))^{2}}\right]x^{2}+\frac{np-i+1-p}{\sqrt{p(1-p)}}+\frac{2(2p-1)}{3\sqrt{p(1-p)}}.$

    Remark 5. By putting $h(x)=x^{2}$ and $m=2$, we derive under the conditions of Theorem 3 that, as $n\to+\infty$,

    $E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2}=\int x^{2}\,d\left(\Phi(x)+\varphi(x)n^{-1/2}S_{1,n}(x)\right)+O(n^{-1})\to1.$

    Therefore, we see that the sequence

    $\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2},\ n\ge N_0\right\}$

    is uniformly bounded over $n\ge N_0$. Here $N_0$ is a positive integer such that the moment $EX_{i:n}^{2}$ exists whenever $n\ge N_0$. In accordance with the inequality $|E\xi|\le\sqrt{E\xi^{2}}$, valid whenever the moment $E\xi^{2}$ exists, the sequence

    $\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right),\ n\ge N_0\right\}$

    is also uniformly bounded, say by a number $L$, over $n\in\{N_0,N_0+1,\ldots\}$. Now that

    $\left|E\,\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right|\le L,\quad n\ge N_0,$

    we have

    $|E(X_{i:n}-x_p)|\le L\left[\sqrt{p(1-p)}/f(x_p)\right]n^{-1/2},\quad n\ge N_0.$ (1.6)

    Under the conditions of Theorem 2, when we estimate a population quantile $x_p$ by an OS $X_{i:n}$, the estimate is usually not unbiased; comparing the two conclusions (1.3) and (1.6), the result (1.3) in Theorem 2 is more accurate.

    Remark 6. For a random sample $(Y_1,Y_2,\ldots,Y_n)$ from a uniformly distributed population $Y\sim U[0,1]$, we write $Y_{i:n}$ for the $i$th OS. Obviously, the conditions in Theorem 3 are fulfilled for any integer $m\ge2$. That yields

    $E\left(\frac{n^{1/2}(Y_{i:n}-p)}{\sqrt{p(1-p)}}\right)^{2}=\int x^{2}\,d\left(\Phi(x)+\varphi(x)n^{-1/2}S_{1,n}(x)\right)+O(n^{-1})=1+O(n^{-1/2}),$

    and

    $E\left(\frac{n^{1/2}(Y_{i:n}-p)}{\sqrt{p(1-p)}}\right)^{6}=\int x^{6}\,d\left(\Phi(x)+\varphi(x)\sum_{i=1}^{5}n^{-i/2}S_{i,n}(x)\right)+O(n^{-3})=\int x^{6}\varphi(x)\,dx+\sum_{i=1}^{5}\alpha_{i}(n)n^{-i/2}+O(n^{-3})=15+\sum_{i=1}^{5}\alpha_{i}(n)n^{-i/2}+O(n^{-3}),$

    where for each $i=1,2,\ldots,5$, $\alpha_{i}(n)$ is uniformly bounded over $n$.

    As analyzed above, we conclude that under the assumption $i/n=p+O(n^{-1})$,

    $E(Y_{i:n}-p)^{2}\sim p(1-p)n^{-1}\quad\text{and}\quad E(Y_{i:n}-p)^{6}\sim 15p^{3}(1-p)^{3}n^{-3}.$ (1.7)
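    Since $Y_{i:n}\sim\mathrm{Beta}(i,\,n-i+1)$, the first approximation in (1.7) can be checked exactly rather than by simulation; the following sketch (with $n=1000$ and $p=0.3$ as our arbitrary choices) compares $E(Y_{i:n}-p)^{2}$ with $p(1-p)n^{-1}$:

```python
def second_moment_about_p(i, n, p):
    # Y_{i:n} ~ Beta(i, n-i+1): E(Y-p)^2 = Var(Y) + (EY - p)^2
    a, b = i, n - i + 1
    mean = a / (a + b)                             # = i/(n+1)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return var + (mean - p) ** 2

n, p = 1000, 0.3
exact = second_moment_about_p(round(n * p), n, p)
print(exact / (p * (1 - p) / n))  # ratio close to 1, as (1.7) predicts
```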

    Based on Theorems 1 and 3, here we give some alternative conditions to those in Theorem 3 in order to broaden its range of applications, covering situations in which the population $X$ in Theorem 3 has no moment of any positive order. We obtain:

    Theorem 4. Let $(X_1,\ldots,X_n)$ be a random sample derived from a population $X$ which has a continuous pdf $f(x)$. Let $p\in(0,1)$ and let $x_p$ be the $p$-quantile of $X$ satisfying $f(x_p)>0$ on a neighborhood of $x_p$, and let the following three conditions hold:

    (i) The cdf $F(x)$ of $X$ has an inverse function $G(x)$ satisfying

    $|G(x)|\le Bx^{-Q}(1-x)^{-Q}$ (1.8)

    for some constants $B>0$, $Q\ge0$ and all $x\in(0,1)$.

    (ii) $F(x)$ has $m+1$ bounded derivatives, where $m$ is a positive integer.

    (iii) $i/n=p+O(n^{-1})$ and $a_{i:n}=x_p+O(n^{-1})$ as $n\to+\infty$.

    Then the following limiting result holds as $n\to+\infty$:

    $E\left(\frac{f(x_p)(X_{i:n}-a_{i:n})}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}).$ (1.9)

    Remark 7. For the mean $\bar{X}_{n}$ of the random sample $(X_1,\ldots,X_n)$ of a population $X$ whose moment $EX^{m}$ exists, according to conclusion (1.4) we see

    $E(\bar{X}_{n}-\mu)^{m}=\left(\frac{\sigma}{\sqrt{n}}\right)^{m}EZ^{m}+o(n^{-m/2}),$

    which indicates that the $m$th central moment of the sample mean, $E(\bar{X}_{n}-\mu)^{m}$, is usually an infinitesimal of type $O(n^{-m/2})$.

    Here, under the conditions of Theorem 4, if $EX_{i:n}=x_p+O(n^{-1})$ (we will verify in a later section that, by Theorem 2, this assertion holds for almost all continuous populations we may encounter), then by Eq (1.9) we are sure that the central moment $E(X_{i:n}-EX_{i:n})^{m}$ is also an infinitesimal of type $O(n^{-m/2})$. Moreover, by putting $a_{i:n}=x_p$, we derive under the assumptions of Theorem 4 that

    $E\left(\frac{f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}).$

    Similar to Remark 1, we can also show by the Sandwich Theorem that

    $E\left(\frac{f(x_p)(m_{n,p}-x_p)}{\sqrt{p(1-p)/n}}\right)^{m}=EZ^{m}+O(n^{-1/2}),$ (1.10)

    indicating that if we use the sample $p$-quantile $m_{n,p}$ to estimate $x_p$, the corresponding population $p$-quantile, then $E(m_{n,p}-x_p)^{m}=O(n^{-m/2})$.

    For estimating a parameter of a population without an expectation, estimators based on functions of sample moments are always futile because of uncontrollable fluctuation. Alternatively, estimators obtained from functions of OSs are usually workable. To find a desirable one of that kind, approximating moment expressions of OSs is therefore significant. For instance, let a population $X$ be distributed according to the pdf

    $f(x,\theta_1,\theta_2)=\frac{\theta_2}{\pi\left[\theta_2^{2}+(x-\theta_1)^{2}\right]},\quad -\infty<x<+\infty,$ (1.11)

    where the constants satisfy $\theta_2>0$, and $\theta_1$ is unknown. Here $x_{0.56}=0.19076\theta_2+\theta_1$ and $x_{0.56}+x_{0.44}=2x_{0.5}=2\theta_1$. To estimate $x_{0.5}=\theta_1$, we now compare the estimators $m_{n,0.5}$ and $(m_{n,0.56}+m_{n,0.44})/2$. For a large sample size, we deduce according to conclusion (1.10) that

    $E\left(\frac{m_{n,0.56}+m_{n,0.44}}{2}-\theta_1\right)^{2}=E\left(\frac{(m_{n,0.56}-x_{0.56})+(m_{n,0.44}-x_{0.44})}{2}\right)^{2}\le\frac{E(m_{n,0.56}-x_{0.56})^{2}+E(m_{n,0.44}-x_{0.44})^{2}}{2}=\frac{0.44\times0.56}{(f(x_{0.56}))^{2}}\,n^{-1}+O(n^{-3/2})=0.2554\pi\theta_2^{2}\,n^{-1}+O(n^{-3/2}),$

    whereas

    $E(m_{n,0.5}-\theta_1)^{2}=0.785\pi\theta_2^{2}\,n^{-1}+O(n^{-3/2}).$

    Obviously, both estimators $m_{n,0.5}$ and $(m_{n,0.56}+m_{n,0.44})/2$ are unbiased for $\theta_1$. For large $n$, the main part $0.2554\pi\theta_2^{2}n^{-1}$ of the mean square error (MSE) $E[(m_{n,0.56}+m_{n,0.44})/2-\theta_1]^{2}$ is even less than one-third of $0.785\pi\theta_2^{2}n^{-1}$, the main part of the MSE $E(m_{n,0.5}-\theta_1)^{2}$. That is the fundamental reason why Sen obtained in [8] the conclusion that the so-called optimum mid-range $(m_{n,0.56}+m_{n,0.44})/2$ is more efficient than the sample median $m_{n,0.5}$ in estimating $\theta_1$.
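    The two estimators can also be compared empirically by a Monte Carlo sketch of ours (standard Cauchy with $\theta_1=0$, $\theta_2=1$, using the sample-quantile convention of Remark 1; sample size, replication count and seed are arbitrary choices); both MSEs are of order $n^{-1}$:

```python
import math, random

random.seed(2)

def m(xs, p):
    # sample p-quantile of Remark 1
    ys, n = sorted(xs), len(xs)
    k = p * n
    return (ys[int(k) - 1] + ys[int(k)]) / 2 if k == int(k) else ys[int(k)]

n, reps = 499, 2000
mse_med = mse_mid = 0.0
for _ in range(reps):
    xs = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]  # theta1 = 0
    mse_med += m(xs, 0.5) ** 2 / reps
    mse_mid += ((m(xs, 0.56) + m(xs, 0.44)) / 2) ** 2 / reps
print(mse_med * n, mse_mid * n)  # both of order 1 after multiplying by n
```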

    By statistical analysis of the scores presented in the following Table 1, standing for 30 returns of closing prices of the German Stock Index (DAX), Mahdizadeh and Zamanzade reasonably applied the previously mentioned Cauchy distribution (1.11) as a stock-market return distribution, with $\theta_1$ and $\theta_2$ estimated respectively as $\hat{\theta}_1=0.0009629174$ and $\hat{\theta}_2=0.003635871$ (see [9]).

    Table 1.  Scores for 30 returns of closing prices of DAX.
    0.0011848 -0.0057591 -0.0051393 -0.0051781 0.0020043 0.0017787
    0.0026787 -0.0066238 -0.0047866 -0.0052497 0.0004985 0.0068006
    0.0016206 0.0007411 -0.0005060 0.0020992 -0.0056005 0.0110844
    -0.0009192 0.0019014 -0.0042364 0.0146814 -0.0002242 0.0024545
    -0.0003083 -0.0917876 0.0149552 0.0520705 0.0117482 0.0087458


    Now we utilize $(m_{n,0.56}+m_{n,0.44})/2$ as a quick estimator of $\theta_1$ and derive the value $0.00105955$, which is roughly close to the estimate $0.0009629174$ of reference [9].
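    That quick estimate can be reproduced directly from the scores of Table 1 (a sketch; the helper `m` implements the sample-quantile convention of Remark 1):

```python
returns = [
     0.0011848, -0.0057591, -0.0051393, -0.0051781,  0.0020043,  0.0017787,
     0.0026787, -0.0066238, -0.0047866, -0.0052497,  0.0004985,  0.0068006,
     0.0016206,  0.0007411, -0.0005060,  0.0020992, -0.0056005,  0.0110844,
    -0.0009192,  0.0019014, -0.0042364,  0.0146814, -0.0002242,  0.0024545,
    -0.0003083, -0.0917876,  0.0149552,  0.0520705,  0.0117482,  0.0087458,
]

def m(xs, p):
    # sample p-quantile of Remark 1: X_{floor(pn)+1:n} when pn is not an integer
    ys, n = sorted(xs), len(xs)
    k = p * n
    return (ys[int(k) - 1] + ys[int(k)]) / 2 if k == int(k) else ys[int(k)]

theta1_hat = (m(returns, 0.56) + m(returns, 0.44)) / 2
print(theta1_hat)  # theta1 estimate, approximately 0.00105955
```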

    Since many estimation problems (see [10] for a reference) still deal with situations where a population has no expectation, the above analysis suggests that further study of moment convergence for some OSs may be promising.

    Lemma 1. (See [11] and [12].) For a random sequence $\{\xi_1,\xi_2,\ldots\}$ converging in distribution to a RV $\xi$, which we write as $\xi_n\xrightarrow{D}\xi$, if $d>0$ is a constant and the following uniform integrability holds:

    $\lim_{s\to\infty}\sup_{n}E|\xi_n|^{d}I_{\{|\xi_n|^{d}\ge s\}}=0,$

    then $\lim_{n\to\infty}E|\xi_n|^{d}=E|\xi|^{d}$ and accordingly $\lim_{n\to\infty}E\xi_n^{d}=E\xi^{d}$.

    Remark 8. As discarding a fixed number of terms from $\{\xi_1,\xi_2,\ldots\}$ does not affect the conclusion $\lim_{n\to\infty}E|\xi_n|^{d}=E|\xi|^{d}$, the above condition $\lim_{s\to+\infty}\sup_{n}E|\xi_n|^{d}I_{\{|\xi_n|^{d}\ge s\}}=0$ can be replaced by $\lim_{s\to+\infty}\sup_{n\ge M}E|\xi_n^{d}|I_{\{|\xi_n^{d}|\ge s\}}=0$ for any positive constant $M>0$.

    Lemma 2. For $p\in(0,1)$ and a random sample $(\xi_1,\xi_2,\ldots,\xi_n)$ from a population possessing a continuous pdf $f(x)$, if the $p$-quantile $x_p$ of the population satisfies $f(x_p)>0$, then for the $i$th OS $\xi_{i:n}$, where $i/n=p+o(1)$, we have $\xi_{i:n}\xrightarrow{D}x_p$.

    Proof. Obviously, the sequence $\left\{\frac{f(x_p)(\xi_{i:n}-x_p)}{\sqrt{p(1-p)/n}},\ n=1,2,\ldots\right\}$ has an asymptotic standard normal distribution $N(0,1^2)$; thus the statistic $\xi_{i:n}$ converges to $x_p$ in probability. That leads to the conclusion $\xi_{i:n}\xrightarrow{D}x_p$, for the reason that, for a sequence of RVs, convergence in probability to a constant is equivalent to convergence in distribution to that constant.

    Clarification before presenting the proof:

    ● Under the assumption $i/n=p+o(1)$ as $n\to\infty$, it would be better to think of $i$ as a function of $n$ and to use a symbol such as $a_n$ instead of $i$. Nevertheless, for simplicity's sake, we prefer no adjustment.

    ● Throughout our paper, $C_1,C_2,\ldots$ are suitable positive constants.

    As $\frac{i}{n}\to p\in(0,1)$ when $n\to\infty$, we only need to care about large values of $n$, $i$ and $n-i$.

    Let an integer $K>\delta q$ be given and let $M>0$ be a number such that, if $n\ge M$, then the inequalities $i-1-\delta q>0$, $n-i-\delta q>0$, $n-i-K>0$ and $\frac{i+K}{n}<v=\frac{1+p}{2}$ all hold simultaneously. Here the existence of $v$ in the last inequality is ensured by the fact that $\frac{i+K}{n}\to p$ as $n\to\infty$.

    According to Lemmas 1 and 2 as well as Remark 8, to prove Theorem 1 we only need to show that

    $\lim_{s^{\delta}\to+\infty}\sup_{n\ge M}E|X_{i:n}^{\delta}|\,I_{\{|X_{i:n}^{\delta}|\ge s^{\delta}\}}=0.$ (3.1)

    That is,

    $\lim_{s\to+\infty}\sup_{n\ge M}\int_{|u|\ge s}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)[1-F(u)]^{n-i}\,du=0.$

    To show that equation, it suffices for us to prove respectively

    $\lim_{s\to+\infty}\sup_{n\ge M}\int_{s}^{+\infty}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)[1-F(u)]^{n-i}\,du=0$

    and

    $\lim_{s\to+\infty}\sup_{n\ge M}\int_{-\infty}^{-s}|u|^{\delta}\,\frac{n!}{(i-1)!(n-i)!}F^{i-1}(u)f(u)[1-F(u)]^{n-i}\,du=0.$

    Equivalently, putting $x=F(u)$, we need to prove respectively

    $\lim_{t\to1^{-}}\sup_{n\ge M}\int_{t}^{1}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=0$ (3.2)

    as well as

    $\lim_{t\to0^{+}}\sup_{n\ge M}\int_{0}^{t}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=0.$

    As both proofs are similar in fashion, we choose to prove Eq (3.2) only. Actually, according to the given condition $|G(x)|\le Bx^{-q}(1-x)^{-q}$, we see

    $\lim_{t\to1^{-}}\sup_{n\ge M}\int_{t}^{1}|G^{\delta}(x)|\,\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx \le B^{\delta}\lim_{t\to1^{-}}\sup_{n\ge M}\int_{t}^{1}\frac{n!}{(i-1)!(n-i)!}x^{i-1-\delta q}(1-x)^{n-i-\delta q}\,dx \le B^{\delta}\lim_{t\to1^{-}}\sup_{n\ge M}\int_{t}^{1}\frac{n!}{(i-1)!(n-i)!}(1-x)^{n-i-\delta q}\,dx \le B^{\delta}\lim_{t\to1^{-}}\sup_{n\ge M}\frac{n!}{(i-1)!(n-i)!}(1-t)^{n-i-K}(1-t)^{K+1-\delta q} \le B^{\delta}\lim_{t\to1^{-}}\sup_{n\ge M}\frac{n!\,(1-t)^{n-i-K}}{(i-1)!(n-i)!} \le C_1\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n!\times n}{i!\,(n-i)!}x^{n-i-K}.$ (3.3)

    Here the positive constant $C_1>0$ exists because $n/i=1/p+o(1)$ with $p\in(0,1)$.

    Now applying Stirling's formula $n!=\sqrt{2\pi n}\,(n/e)^{n}e^{\frac{\theta}{12n}}$, where $\theta\in(0,1)$ (see [13]), we have

    $\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n!\times n}{i!\,(n-i)!}x^{n-i-K} \le C_2\lim_{x\to0^{+}}\sup_{n\ge M}\frac{\sqrt{2\pi n}\,(n/e)^{n}\times n}{\sqrt{2\pi i}\,(i/e)^{i}\,\sqrt{2\pi(n-i)}\,((n-i)/e)^{n-i}}x^{n-i-K} \le C_3\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n^{n}\times n}{i^{i}(n-i)^{n-i}}x^{n-i-K} = C_3\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{(\frac{i}{n})^{i}(1-\frac{i}{n})^{n-i}}x^{n-i-K} = C_3\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{\left[(\frac{i}{n})^{\frac{i}{n}}(1-\frac{i}{n})^{1-\frac{i}{n}}\right]^{n}}x^{n-i-K}.$ (3.4)

    Noting that

    $\left(\frac{i}{n}\right)^{\frac{i}{n}}\left(1-\frac{i}{n}\right)^{1-\frac{i}{n}}\to p^{p}(1-p)^{1-p}$

    as $n\to\infty$, we see that there exists a positive constant, say $Q>0$, such that

    $\left(\frac{i}{n}\right)^{\frac{i}{n}}\left(1-\frac{i}{n}\right)^{1-\frac{i}{n}}\ge Q\,p^{p}(1-p)^{1-p}$

    for all $n$. Consequently,

    $\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{\left[(\frac{i}{n})^{\frac{i}{n}}(1-\frac{i}{n})^{1-\frac{i}{n}}\right]^{n}}x^{n-i-K} \le C_4\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{\left[Q\,p^{p}(1-p)^{1-p}\right]^{n}}x^{n-i-K}.$ (3.5)

    Due to the assumption $\frac{i+K}{n}<v=\frac{1+p}{2}<1$ for $n\ge M$, we derive

    $\lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{\left[Q\,p^{p}(1-p)^{1-p}\right]^{n}}x^{n-i-K} \le \lim_{x\to0^{+}}\sup_{n\ge M}\frac{n}{\left[Q\,p^{p}(1-p)^{1-p}\right]^{n}}x^{n-vn} = \lim_{x\to0^{+}}\sup_{n\ge M}\left[\frac{x^{1-v}}{Q\,p^{p}(1-p)^{1-p}}\right]^{n}n \le \lim_{u\to0^{+}}\sup_{n\ge1}u^{n}n.$ (3.6)

    Finally, by the fact that if $u>0$ is given sufficiently small, then the first term of the sequence $\{u^{n}n,\ n\ge1\}$ is the maximum, we can confirm

    $\lim_{u\to0^{+}}\sup_{n\ge1}u^{n}n=\lim_{u\to0^{+}}u=0.$ (3.7)

    Combining the five conclusions numbered from (3.3) to (3.7), we obtain Eq (3.2).

    Here we would like to assume $U>1$ (otherwise we may use $U+2$ instead of $U$).

    By the reasoning interpreted in Remark 2 and according to condition (1.2), we see that there is a constant $A>0$ satisfying

    $|G'''(x)\,x^{U}(1-x)^{U}|\le A.$ (3.8)

    Now we define $Y=F(X)$ and $Y_{i:n}=F(X_{i:n})$, or equivalently $X=G(Y)$ and $X_{i:n}=G(Y_{i:n})$; then $G(p)=x_p$. Obviously, the conclusions in Remark 6 are workable here.

    By the Taylor expansion formula we have

    $G(Y_{i:n})=G(p)+G'(p)(Y_{i:n}-p)+\frac{G''(p)}{2!}(Y_{i:n}-p)^{2}+\frac{1}{3!}G'''(\xi)(Y_{i:n}-p)^{3},$

    where

    $\xi\in\left(\min(Y_{i:n},p),\ \max(Y_{i:n},p)\right).$

    Noting that almost surely $0<\min(Y_{i:n},p)<\xi<\max(Y_{i:n},p)<1$, so that $\xi\ge pY_{i:n}$ and $1-\xi\ge(1-p)(1-Y_{i:n})$, we obtain

    $\left|EG(Y_{i:n})-G(p)-G'(p)E(Y_{i:n}-p)-\frac{G''(p)}{2}E(Y_{i:n}-p)^{2}\right| = \left|E\left[\frac{G'''(\xi)}{3!}(Y_{i:n}-p)^{3}\right]\right| \le \frac{1}{6}E\left[A\,\xi^{-U}(1-\xi)^{-U}|Y_{i:n}-p|^{3}\right] \le \frac{A}{6[p(1-p)]^{U}}\,E\left[Y_{i:n}^{-U}(1-Y_{i:n})^{-U}|Y_{i:n}-p|^{3}\right] \le \frac{A}{6[p(1-p)]^{U}}\sqrt{E\left[Y_{i:n}^{-2U}(1-Y_{i:n})^{-2U}\right]}\sqrt{E(Y_{i:n}-p)^{6}} = O(n^{-3/2})$ (3.9)

    by Eq (3.8) and the Cauchy–Schwarz inequality. Here the factor $E[Y_{i:n}^{-2U}(1-Y_{i:n})^{-2U}]$ is bounded over large $n$, being a ratio of Beta functions since $Y_{i:n}\sim\mathrm{Beta}(i,n-i+1)$, and the last step is in accordance with (1.7).

    Now we can draw the conclusion that

    $EG(Y_{i:n})-G(p)-G'(p)E(Y_{i:n}-p)-\frac{1}{2}G''(p)E(Y_{i:n}-p)^{2}=o(n^{-1}).$ (3.10)

    That is,

    $EX_{i:n}-x_{p}-G'(p)\left(\frac{i}{n+1}-p\right)-\frac{1}{2}G''(p)E(Y_{i:n}-p)^{2}=o(n^{-1}),$ (3.11)

    provided $i/n=p+O(n^{-1})$.

    Still according to conclusion (1.7), we have

    $E(Y_{i:n}-p)^{2}=O(n^{-1}).$

    Finally, as $i/n=p+O(n^{-1})$ also guarantees $i/(n+1)-p=O(n^{-1})$, we can complete the proof of $E(X_{i:n}-x_{p})=O(n^{-1})$, or equivalently

    $|E(X_{i:n}-x_{p})|=O(n^{-1}),$

    by the assertion of (3.11).

    As $EZ=0$, the proposition holds when $m=1$; now we only consider the case $m\ge2$. By Theorem 1, we see $EX_{i:n}^{2}\to x_{p}^{2}$; therefore $E|X_{s:j}|<\infty$ holds for some integer $j$ and $s\in\{1,\ldots,j\}$, and Theorem 3 is workable here when we put $h(x)=x^{m}$. We derive

    $E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m} = \int x^{m}\,d\left(\Phi(x)+\varphi(x)\sum_{i=1}^{m-1}n^{-i/2}S_{i,n}(x)\right)+O(n^{-m/2}) = EZ^{m}+\sum_{i=1}^{m-1}\left(n^{-i/2}\int x^{m}\,d(\varphi(x)S_{i,n}(x))\right)+O(n^{-m/2}).$ (3.12)

    Moreover, for a given positive integer $m\ge2$, as the coefficients of the polynomial $S_{i,n}(x)$ are uniformly bounded over $n$ and $\varphi'(x)=-x\varphi(x)$, the sequence of the integrals

    $\left\{\int x^{m}\,d(\varphi(x)S_{i,n}(x)),\ n=1,2,\ldots\right\}$

    is also uniformly bounded over $n$. That indicates that

    $E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m}=EZ^{m}+O(n^{-1/2})$ (3.13)

    according to conclusion (3.12).

    As a consequence, we can conclude that for explicitly given $m\ge2$ the sequence

    $\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{m},\ n=1,2,\ldots\right\}$ (3.14)

    is uniformly bounded over $n$. Moreover, due to the inequality

    $\left|E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)\right| \le \sqrt{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{2}},$

    we see that the sequence

    $\left\{E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right),\ n=1,2,\ldots\right\}$

    is also uniformly bounded over $n$.

    Now that $a_{i:n}=x_{p}+O(n^{-1})$, we complete the proof by the following reasoning:

    $E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m} = E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}+\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m} = \sum_{u=0}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{u}\right] = \sum_{u=2}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}E\left(\frac{n^{1/2}f(x_p)(X_{i:n}-x_p)}{\sqrt{p(1-p)}}\right)^{u}\right]+O(n^{-1/2}) = \sum_{u=2}^{m}\left[\binom{m}{u}\left(\frac{n^{1/2}f(x_p)(x_p-a_{i:n})}{\sqrt{p(1-p)}}\right)^{m-u}\left(EZ^{u}+O(n^{-1/2})\right)\right]+O(n^{-1/2}) = EZ^{m}+O(n^{-1/2}).$ (3.15)

    Now we consider the applicability of the theorems obtained so far. As the other conditions can be trivially or similarly verified, here we mainly focus on the verification of condition (1.2).

    Example 1: Let the population $X$ have a Cauchy distribution with the pdf $f(y)=\frac{1}{\pi(1+y^{2})}$, $-\infty<y<+\infty$; correspondingly, the inverse function of the cdf of $X$ can be figured out to be

    $G(x)=-\frac{1}{\tan(\pi x)},\quad 0<x<1,$

    satisfying

    $\lim_{x\to0^{+}}G'''(x)\,x^{5}(1-x)^{5}=\lim_{x\to1^{-}}G'''(x)\,x^{5}(1-x)^{5}=0.$
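    Numerically, one can watch this limit with $G'''(x)=2\pi^{3}\csc^{2}(\pi x)\left(2\cot^{2}(\pi x)+\csc^{2}(\pi x)\right)$, obtained by differentiating $G(x)=-1/\tan(\pi x)$ three times (a small check of ours, not part of the proof):

```python
import math

def G3(x):
    # third derivative of the standard Cauchy quantile G(x) = -1/tan(pi*x)
    c = 1 / math.sin(math.pi * x)                        # csc(pi x)
    t = math.cos(math.pi * x) / math.sin(math.pi * x)    # cot(pi x)
    return 2 * math.pi ** 3 * c ** 2 * (2 * t ** 2 + c ** 2)

for x in (1e-2, 1e-3, 1e-4, 1 - 1e-4):
    print(x, G3(x) * x ** 5 * (1 - x) ** 5)   # tends to 0 at both endpoints
```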

    Example 2: For $X\sim f(x)=\frac{1}{x(\ln x)^{2}}I_{[e,\infty)}(x)$, we have

    $G(x)=e^{\frac{1}{1-x}}I_{(0,1)}(x),$

    and

    $\lim_{x\to0^{+}}G'''(x)\,x(1-x)=\lim_{x\to1^{-}}G'''(x)\,x(1-x)=0.$

    Example 3: For $X\sim N(0,1^{2})$, on that occasion, $f(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^{2}}{2}}$, $f'(y)=-yf(y)$ and $y=G(x)\Leftrightarrow x=F(y)=\int_{-\infty}^{y}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}\,dt$. Therefore, as $x\to0^{+}$, applying L'Hôpital's rule twice,

    $\frac{(G(x))^{2}}{\ln(x(1-x))}\sim\frac{(G(x))^{2}}{\ln x}=\frac{y^{2}}{\ln(F(y))}\sim\frac{2yF(y)}{f(y)}\sim2\cdot\frac{(yF(y))'}{(f(y))'}=2\cdot\frac{F(y)+yf(y)}{-yf(y)}=2\left[\frac{F(y)}{-yf(y)}-1\right].$

    Noting that, as $x=F(y)\to0^{+}$ or equivalently $y\to-\infty$,

    $\frac{F(y)}{yf(y)}\sim\frac{f(y)}{f(y)+yf'(y)}=\frac{f(y)}{f(y)(1-y^{2})}=\frac{1}{1-y^{2}}\to0,$ (4.1)

    we have, as $x\to0^{+}$,

    $\frac{(G(x))^{2}}{\ln(x(1-x))}\to-2.$

    By the same fashion, we can show as $x\to1^{-}$ that

    $\frac{(G(x))^{2}}{\ln(x(1-x))}\to-2.$

    In conclusion, for $x\to0^{+}$ as well as for $x\to1^{-}$,

    $(G(x))^{2}\sim-2\ln(x(1-x)).$ (4.2)

    Accordingly, there exists a positive $M>0$ such that for all $x\in(0,1)$,

    $(G(x))^{2}\le M|\ln(x(1-x))|=-M\ln(x(1-x)).$ (4.3)

    No matter whether $x\to0^{+}$ or $x\to1^{-}$, we get, for $|y|\ge1$,

    $|G'''(x)|=\left|\frac{-f(y)f''(y)+3(f'(y))^{2}}{(f(y))^{5}}\right|=\left|\frac{-(y^{2}-1)(f(y))^{2}+3(yf(y))^{2}}{(f(y))^{5}}\right|=\frac{2y^{2}+1}{(f(y))^{3}}\le\frac{3y^{2}}{(f(y))^{3}}=\frac{3(G(x))^{2}}{(f(G(x)))^{3}}\le\frac{-3M\ln(x(1-x))}{(f(G(x)))^{3}}.$ (4.4)

    Here the last step holds in accordance with Eq (4.3).

    For $x\to0^{+}$ as well as for $x\to1^{-}$,

    $\frac{-3M\ln(x(1-x))}{(f(G(x)))^{3}}=\frac{-3M\ln(x(1-x))}{\left(\frac{1}{\sqrt{2\pi}}\right)^{3}\exp\left(-\frac{3(G(x))^{2}}{2}\right)}=-3M(\sqrt{2\pi})^{3}\left[\ln(x(1-x))\right]\left[\exp\left((G(x))^{2}\right)\right]^{\frac{3}{2}}\le-3M(\sqrt{2\pi})^{3}\left[\ln(x(1-x))\right]\left[\exp\left(-M\ln(x(1-x))\right)\right]^{\frac{3}{2}}=-3M(\sqrt{2\pi})^{3}\left[\ln(x(1-x))\right]\left(x(1-x)\right)^{-\frac{3M}{2}}.$ (4.5)

    Thus we can see the achievement of condition (1.2) by

    $\lim_{x\to0^{+}}\left(G'''(x)\,x^{2M}(1-x)^{2M}\right)=\lim_{x\to1^{-}}\left(G'''(x)\,x^{2M}(1-x)^{2M}\right)=0,$ (4.6)

    since $2M>\frac{3M}{2}$ and the extra power absorbs the logarithmic factor.

    Remark 9. For a RV $X$ with a cdf $F(x)$ possessing an inverse function $G(x)$, we can prove that if $\sigma>0$ and $\mu\in(-\infty,+\infty)$ are constants, then the cdf of the RV $\sigma X+\mu$ will have the inverse function $\sigma G(x)+\mu$. Thus for the general case $X\sim N(\mu,\sigma^{2})$, we can still verify condition (1.2).

    Example 4: For a population $X\sim U[a,b]$, $G(x)=(b-a)x+a$ is the inverse function of the cdf of $X$. As $G'''(x)=0$, the assumption of condition (1.2) holds.

    Generally, for any population distributed over an interval $[a,b]$ according to a continuous pdf $f(x)$, if $G'''(0^{+})$ and $G'''(1^{-})$ exist, then condition (1.2) holds.

    For reasons of length, here we only point out, without detailed proof, that for a population $X$ following a distribution such as the Gamma distribution (including special cases such as the Exponential and Chi-square distributions), the Beta distribution and so on, the requirement of condition (1.2) can be satisfied.

    For a random sample $(X_1,\ldots,X_n)$ derived from a population $X$ which is uniformly distributed over the interval $[0,1]$, the moment of the $i$th OS is $EX_{i:n}=i/(n+1)\to p$ if $i/n\to p\in(0,1)$ as $n\to\infty$. Let $a_{i:n}=i/n$. According to conclusion (1.9), where $f(x_p)=1$ and $x_p=p\in(0,1)$, we have for integer $m\ge2$,

    $E(X_{i:n}-a_{i:n})^{m}=\int_{0}^{1}\left(x-\frac{i}{n}\right)^{m}\frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}\,dx=EZ^{m}\left(p(1-p)\right)^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}).$ (4.7)

    That results in

    $\frac{n!\int_{0}^{1}(nx-i)^{m}x^{i-1}(1-x)^{n-i}\,dx}{(i-1)!(n-i)!\,n^{m}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}),$ (4.8)

    or equivalently

    $\frac{n!\sum_{j=0}^{m}\left[\binom{m}{j}n^{j}(-i)^{m-j}B(i+j,\,n+1-i)\right]}{(i-1)!(n-i)!\,n^{m}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}).$

    Consequently we have the following equation

    $\frac{n!\sum_{j=0}^{m}\left[\binom{m}{j}n^{j}(-i)^{m-j}\frac{\Gamma(i+j)\Gamma(n+1-i)}{\Gamma(n+j+1)}\right]}{(i-1)!(n-i)!\,n^{m}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}),$

    which yields

    $\frac{n!\sum_{j=0}^{m}\left[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!}{(n+j)!}\right]}{(i-1)!\,n^{m}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}).$ (4.9)

    As $i/n\to p\in(0,1)$ when $n\to+\infty$, the above equation indicates that

    $\frac{\sum_{j=0}^{m}\left[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}\right]}{n^{2m}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}n^{-\frac{m}{2}}+o(n^{-m/2}).$ (4.10)

    For convenience's sake, we now set $\sum_{k=u}^{v}=0$ and $\prod_{k=u}^{v}=1$ whenever $v<u$. Noting that for given integers $m\ge2$ and $j\in\{0,1,\ldots,m\}$ the expression

    $\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}=\binom{m}{j}(-1)^{m-j}\left(i^{m-j}\prod_{k=1}^{j}[(i-1)+k]\right)\left(n^{j}\prod_{k=j+1}^{m}(n+k)\right)$ (4.11)

    is a polynomial in $i$ and $n$, we see that the numerator of the LHS of Eq (4.10) is also a polynomial, which we now denote as

    $\sum_{j=0}^{m}\left[\binom{m}{j}n^{j}(-i)^{m-j}\frac{(i-1+j)!\,(n+m)!}{(i-1)!\,(n+j)!}\right]:=\sum_{s=0}^{m}\sum_{t=0}^{m}a_{s,t}^{(m)}\,i^{m-s}n^{m-t}.$

    Equivalently, we derive

    $\sum_{j=0}^{m}\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j}\prod_{k=1}^{j}(i-1+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\}=\sum_{k=0}^{2m}\sum_{s+t=k}a_{s,t}^{(m)}\,i^{m-s}n^{m-t}.$

    By Eq (4.10), we see that for any given $p\in(0,1)$, if $i/n\to p\in(0,1)$ as $n\to+\infty$, then

    $\frac{\sum_{k=0}^{2m}\sum_{s+t=k}a_{s,t}^{(m)}\,i^{m-s}n^{m-t}}{n^{3m/2}}=EZ^{m}(p(1-p))^{\frac{m}{2}}+o(1).$ (4.12)

    Noting that

    $\sum_{s+t=k}a_{s,t}^{(m)}\,i^{m-s}n^{m-t}=\left(\sum_{s+t=k}a_{s,t}^{(m)}p^{m-s}\right)n^{2m-k}+o(n^{2m-k}),$

    we see in accordance with (4.12) that

    $\frac{\sum_{k=0}^{2m}\left[\left(\sum_{s+t=k}a_{s,t}^{(m)}p^{m-s}\right)n^{2m-k}+o(n^{2m-k})\right]}{n^{3m/2}}=EZ^{m}(p-p^{2})^{\frac{m}{2}}+o(1).$ (4.13)

    That indicates that if a non-negative integer $k$ satisfies $2m-k>3m/2$, or equivalently $0\le k<m/2$, then the coefficient of $n^{2m-k}$ in the numerator of the LHS of Eq (4.13) must be zero for any given $p\in(0,1)$; namely,

    $\sum_{s+t=k}a_{s,t}^{(m)}p^{m-s}=0,\quad s+t=k<m/2,$

    holds for any $p\in(0,1)$. Thereby, for the case of non-negative integers $s$ and $t$ satisfying $s+t=k<m/2$, we see that the equation $a_{s,t}^{(m)}=0$ surely holds.

    It is interesting to notice that for large $m$ we immediately have the following three corresponding equations:

    $\sum_{j=0}^{m}(-1)^{m-j}\binom{m}{j}=0,$

    $\sum_{j=2}^{m}\binom{m}{j}(-1)^{m-j}\frac{j(j-1)}{2}=0,$

    and

    $\sum_{j=2}^{m-1}\binom{m}{j}(-1)^{m-j}\frac{j(j-1)}{2}\cdot\frac{(m-j)(m+j+1)}{2}=0,$

    according to the conclusions $a_{0,0}^{(m)}=0$, $a_{1,0}^{(m)}=0$ and $a_{1,1}^{(m)}=0$.

    As for the structure of $a_{s,t}^{(m)}$ when $s\ge2$, $t\ge1$ and $m>2(s+t)$: obviously $s<m-t$ holds on this occasion, and the term $a_{s,t}^{(m)}i^{m-s}n^{m-t}$ in the polynomial

    $\sum_{j=0}^{m}\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j}\prod_{k=1}^{j}(i-1+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\} = \sum_{j=0}^{m}\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j}\prod_{k=0}^{j-1}(i+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\} = \sum_{j=0}^{m}\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\} = \left(\sum_{j=0}^{s}+\sum_{j=s+1}^{m-t}+\sum_{j=m-t+1}^{m}\right)\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\}$

    is also the term $a_{s,t}^{(m)}i^{m-s}n^{m-t}$ in the polynomial

    $\sum_{j=s+1}^{m-t}\left\{\binom{m}{j}(-1)^{m-j}\left[i^{m-j+1}\prod_{k=1}^{j-1}(i+k)\right]\left[n^{j}\prod_{k=j+1}^{m}(n+k)\right]\right\},$

    since for $j\le s$ the lowest power of $i$ occurring is $i^{m-j+1}>i^{m-s}$, while for $j\ge m-t+1$ the lowest power of $n$ occurring is $n^{j}>n^{m-t}$.

    Noting that for given $j\in\{s+1,\ldots,m-t\}$, the monomial

    $\left(\sum_{1\le u_1<u_2<\ldots<u_s\le j-1}u_1u_2\cdots u_s\right)i^{m-s}$

    is the term of degree $m-s$ in the polynomial in $i$

    $i^{m-j+1}\prod_{k=1}^{j-1}(i+k),$

    while the monomial

    $\left(\sum_{j+1\le v_1<v_2<\ldots<v_t\le m}v_1v_2\cdots v_t\right)n^{m-t}$

    is the term of degree $m-t$ in the polynomial in $n$

    $n^{j}\prod_{k=j+1}^{m}(n+k),$

    we see that for $s+t<m/2$,

    $a_{s,t}^{(m)}=\sum_{j=s+1}^{m-t}\left(\binom{m}{j}(-1)^{m-j}\sum_{1\le u_1<\ldots<u_s\le j-1}u_1\cdots u_s\sum_{j+1\le v_1<\ldots<v_t\le m}v_1\cdots v_t\right).$

    Now that $a_{s,t}^{(m)}=0$ holds provided $s+t=k<m/2$ according to Eq (4.13), we conclude the following theorem.

    Theorem 5. If $s$, $t$ and $m$ are integers satisfying $s\ge2$, $t\ge1$ and $m>2(s+t)$, then

    $\sum_{j=s+1}^{m-t}\left(\binom{m}{j}(-1)^{m-j}\sum_{1\le u_1<u_2<\ldots<u_s\le j-1}u_1\cdots u_s\sum_{j+1\le v_1<v_2<\ldots<v_t\le m}v_1\cdots v_t\right)=0.$

    Example 5: For a large integer $m$, according to Theorem 5 we have $a_{2,1}^{(m)}=0$ and $a_{2,2}^{(m)}=0$. Correspondingly, we obtain the equations

    $\sum_{j=3}^{m-1}\left(\binom{m}{j}(-1)^{m-j}\,\frac{\left(\sum_{i=1}^{j-1}i\right)^{2}-\sum_{i=1}^{j-1}i^{2}}{2}\cdot\frac{(m+j+1)(m-j)}{2}\right)=0,$

    and

    $\sum_{j=3}^{m-2}\left(\binom{m}{j}(-1)^{m-j}\,\frac{\left(\sum_{i=1}^{j-1}i\right)^{2}-\sum_{i=1}^{j-1}i^{2}}{2}\cdot\frac{\left(\sum_{i=j+1}^{m}i\right)^{2}-\sum_{i=j+1}^{m}i^{2}}{2}\right)=0.$

    Both equations can be verified with the aid of Maple software.
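    The identities can also be spot-checked without Maple by a short script (a sketch of ours; `theorem5_sum` evaluates the LHS of Theorem 5 directly via elementary symmetric polynomials):

```python
from itertools import combinations
from math import comb, prod

def e(k, vals):
    # elementary symmetric polynomial e_k over the integers in vals
    return sum(prod(c) for c in combinations(vals, k))

def theorem5_sum(m, s, t):
    # LHS of Theorem 5: sum over j = s+1, ..., m-t
    return sum(comb(m, j) * (-1) ** (m - j)
               * e(s, range(1, j)) * e(t, range(j + 1, m + 1))
               for j in range(s + 1, m - t + 1))

for m, s, t in [(7, 2, 1), (9, 2, 2), (11, 2, 1)]:
    print((m, s, t), theorem5_sum(m, s, t))  # 0 in each case, as Theorem 5 predicts
```

    The identity also follows from the finite-difference fact that $\sum_{j=0}^{m}(-1)^{m-j}\binom{m}{j}P(j)=0$ for any polynomial $P$ of degree less than $m$, since the summand here is a polynomial in $j$ of degree $2(s+t)<m$.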

    Let a real $\delta>0$ and an integer $m>0$ be given. For a population satisfying condition (1.1), no matter whether the population has an expectation or not, the moment $EX_{i:n}^{\delta}$ exists and the sequence $\{EX_{i:n}^{\delta},\ n\ge1\}$ converges for large $i$ and $n$ satisfying $i/n\to p\in(0,1)$. Under some further mild assumptions, for large $n$ the $m$th moment of the standardized sequence $\{X_{i:n},\ n\ge1\}$ can be approximated by the $m$th moment $EZ^{m}$ of the standard normal distribution.

    Due to the fact that the existence requirement of some expectation $E|X_{s:j}|$ in Theorem 3 has always been hard to verify for a population without an expectation, real-world data corresponding to such a population of interest have long been unavailable in the vast majority of references. Now that the alternative condition (1.8) is presented, perhaps things will improve in the future, though we still have a long way to go.

    This work was supported by the Science and Technology Plan Projects of Jiangxi Provincial Education Department, grant number GJJ180891.

    The authors declare that there is no conflict of interest.



    [1] R. R. Bahadur, A note on quantiles in large samples, Ann. Math. Stat., 37 (1966), 577–580.
    [2] P. Y. Thomas, N. V. Sreekumar, Estimation of location and scale parameters of a distribution by U-statistics based on best linear functions of order statistics, J. Stat. Plan. Infer., 138 (2008), 2190–2200. https://doi.org/10.1016/j.jspi.2007.09.004 doi: 10.1016/j.jspi.2007.09.004
    [3] J. L. Wang, C. S. Deng, J. F. Li, M. F. Zhou, On variances and covariances of a kind of extreme order statistics, Commun. Stat.-Theory Methods, 45 (2016), 3274–3282. https://doi.org/10.1080/03610926.2014.901373 doi: 10.1080/03610926.2014.901373
    [4] G. Stoops, D. Barr, Moments of certain Cauchy order statistics, Am. Stat., 25 (1971), 51. https://doi.org/10.1080/00031305.1971.10477307 doi: 10.1080/00031305.1971.10477307
    [5] B. V. Bahr, On the convergence of moments in the central limit theorem, Ann. Math. Stat., 36 (1965), 808–818. https://doi.org/10.1214/aoms/1177700055 doi: 10.1214/aoms/1177700055
    [6] P. J. Bickel, Some contributions to the theory of order statistics, Fifth Berkeley symposium, 1 (1967), 575–591.
    [7] R. D. Reiss, Other important approximations, In: Approximate distributions of order statistics, New York: Springer, 1989. https://doi.org/10.1007/978-1-4613-9620-8_7
    [8] P. K. Sen, On some properties of the asymptotic variance of the sample quantiles and mid-ranges, J. R. Stat. Soc. B, 23 (1961), 453–459.
    [9] M. Mahdizadeh, E. Zamanzade, Goodness-of-fit testing for the Cauchy distribution with application to financial modeling, J. King Saud Univ. Sci., 31 (2016), 1167–1174. https://doi.org/10.1016/j.jksus.2019.01.015 doi: 10.1016/j.jksus.2019.01.015
    [10] B. Cory, R. Binod, C. Sher, A new generalized cauchy distribution with an application to annual one day maximum rainfall data, Stat., Optim. Inf. Comput., 9 (2021), 123–136. http://doi.org/10.19139/soic-2310-5070-1000 doi: 10.19139/soic-2310-5070-1000
    [11] J. Shao, Probability theory, In: Mathematical statistics, New York: Springer, 2003. http://doi.org/10.1007/b97553_1
    [12] P. Billingsley, Weak convergence in metric spaces, In: Convergence of probability measures, New York: John Wiley & Sons, 1999, 7–79. https://doi.org/10.1002/9780470316962.ch1
    [13] V. A. Zorich, Integrals depending on a parameter, In: Mathematical analysis II, Berlin, Heidelberg: Springer, 2016,405–492. https://doi.org/10.1007/978-3-662-48993-2_9