In this paper we study the concept of multivariate total positivity according to a direction, examine several of its main properties, and establish different types of dependence among the order statistics of a sample from a distribution function. Our aim is to clarify the relationships surrounding multivariate total positivity and their implications for statistical analysis and probability theory.
Citation: Enrique de Amo, José Juan Quesada-Molina, Manuel Úbeda-Flores. Total positivity and dependence of order statistics[J]. AIMS Mathematics, 2023, 8(12): 30717-30730. doi: 10.3934/math.20231570
There are different ways to discuss dependence relations among random variables and, as Jogdeo [1] notes: "...this is one of the most widely studied objects in probability and statistics."
The concept of dependence in bivariate and multivariate settings has been studied extensively in the recent literature. These concepts are particularly relevant in fields such as economics, insurance, finance, risk management, applied probability and statistics (see, e.g., [2]). Several definitions of positive dependence have been introduced to model the association between large values of a component of a multivariate random vector and large values of the other components —further discussion of most of the dependence concepts that we present in this paper can be found in [3,4,5,6,7,8,9]— including multivariate total positivity of order 2 (MTP2), also known as positive likelihood ratio dependence in the bivariate case. This concept has garnered significant attention, particularly in Gaussian models, owing to its intuitive description in terms of the non-negativity of all correlations and partial correlations. In finance, psychometrics, machine learning, medical statistics and phylogenetics, MTP2 models have been shown to outperform state-of-the-art methods; moreover, there is a fundamental connection between the MTP2 constraint and the assumption of sparsity; see, e.g., [10,11].
However, not all dependence concepts, especially in the multivariate case, cover every form of dependence between random variables. In particular, the above-mentioned MTP2 concept is formulated for a random vector (X1,X2,…,Xd) in which the positive dependence involves all components in the same sense, but it does not cover situations in which the sign of at least one component is reversed, for instance a random vector of the type (−X1,X2,…,Xd). Thus, we intend to extend the concept to random vectors of this kind.
On the other hand, the i-th order statistic of a sample from a distribution function is its i-th smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference [12]. We establish certain types of dependence —both for some of the concepts defined in this paper and for some well-known ones that we will recall— for order statistics.
The paper is organized as follows. In Section 2, we provide the major definitions. In Section 3, we concentrate on the notion of multivariate total positivity according to a direction and provide several properties. In Section 4, we establish certain types of dependence for order statistics. Finally, Section 5 is devoted to conclusions.
Let d≥2 be a natural number. Let (Ω,F,ℙ) be a probability space, where Ω is a nonempty set, F is a σ-algebra of subsets of Ω and ℙ is a probability measure on F, and let X=(X1,…,Xd) be a random vector from Ω to ¯Rd=[−∞,∞]d.
A function f:¯R2⟶[0,+∞[ is totally positive of order two —denoted by TP2— if
f(x′,y′)f(x,y)≥f(x′,y)f(x,y′) |
whenever x≤x′ and y≤y′ [5].
Two random variables X and Y are said to be totally positive of order two —or positively likelihood ratio dependent, denoted by PLRD(X,Y)— if the density function of the pair (X,Y) is TP2.
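As a quick illustration of the TP2 condition (an addition in this edited version, not part of the original development), the inequality can be checked numerically on a finite grid for a concrete density. The Python sketch below writes out a standard bivariate normal density with NumPy; the correlation value, the grid and the tolerance are arbitrary choices, and a grid check is only a sanity check, not a proof.

```python
import numpy as np

def bvn_pdf(x, y, rho=0.6):
    """Standard bivariate normal density with correlation rho."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    quad = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return norm * np.exp(-0.5 * quad)

def is_tp2_on_grid(f, grid, tol=1e-12):
    """Check f(x',y')f(x,y) >= f(x',y)f(x,y') for all x<=x', y<=y' in the grid."""
    for x in grid:
        for xp in grid[grid >= x]:
            for y in grid:
                for yp in grid[grid >= y]:
                    if f(xp, yp) * f(x, y) < f(xp, y) * f(x, yp) - tol:
                        return False
    return True

grid = np.linspace(-2.0, 2.0, 9)
print(is_tp2_on_grid(bvn_pdf, grid))                                # True for rho = 0.6
print(is_tp2_on_grid(lambda x, y: bvn_pdf(x, y, rho=-0.6), grid))   # False for rho = -0.6
```

The outcome is consistent with the classical fact that a bivariate normal density is TP2 exactly when the correlation is non-negative.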
In the multivariate case, a generalization of total positivity of order two can be defined [13]. A function f:¯Rd⟶[0,+∞[ is multivariate totally positive of order two —denoted by MTP2— if
f(x∨y)f(x∧y)≥f(x)f(y) |
for all x=(x1,…,xd),y=(y1,…,yd)∈¯Rd, where
x∨y=(max(x1,y1),…,max(xd,yd)),x∧y=(min(x1,y1),…,min(xd,yd)). |
A random vector X=(X1,…,Xd) is said to be multivariate totally positive of order two —or multivariate positively likelihood ratio dependent— if its joint d-dimensional density f is MTP2. See [14] for examples and [15,16] for applications.
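A similar numerical sanity check can be carried out for the multivariate definition (again an added illustration, not taken from the paper). For a centered Gaussian density whose precision matrix has non-positive off-diagonal entries, the M-matrix situation mentioned in the introduction, the MTP2 inequality is known to hold and can be spot-checked at randomly drawn pairs of points; the particular matrix below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Precision (inverse covariance) matrix with non-positive off-diagonal entries;
# centered Gaussians of this type are known to be MTP2.
K = np.array([[2.0, -0.5, -0.3],
              [-0.5, 2.0, -0.4],
              [-0.3, -0.4, 2.0]])
Sigma = np.linalg.inv(K)

def pdf(z):
    """Centered trivariate normal density with covariance Sigma (precision K)."""
    d = len(z)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * z @ K @ z)

def mtp2_holds(x, y, tol=1e-12):
    """Check f(x v y) f(x ^ y) >= f(x) f(y) for one pair of points."""
    return pdf(np.maximum(x, y)) * pdf(np.minimum(x, y)) >= pdf(x) * pdf(y) - tol

print(all(mtp2_holds(rng.normal(size=3), rng.normal(size=3)) for _ in range(10_000)))  # True
```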
We note that by reversing the sense of the above inequalities, we have the corresponding negative concepts, obtaining similar results which we omit here.
In the next sections, when we talk about these —or other— dependence concepts, we will refer to random variables or to their joint density functions.
In this section, we provide a simple generalization of the TP2 concept in higher dimensions in a directional sense, in which a pair of the components of the random vector can be negative. After giving some simple properties of this concept, we provide a "natural" generalization of the MTP2 concept according to a direction —i.e., the components of the random vector can take negative values— and show that the two newly defined concepts are equivalent. Additional characterizations and properties are given throughout the section.
The next definition generalizes the concept of TP2 to d-dimensions according to a direction.
Definition 1. Let X be a d-dimensional random vector with joint density f, and let α=(α1,…,αd)∈Rd such that |αi|=1 for all i=1,…,d. The function f is said to be multivariate totally positive of order two in pairs according to the direction α —denoted by MTPP2(α)— if
f(x1,…,αixi,…,αjxj,…,xd)f(x1,…,αix′i,…,αjx′j,…,xd)≥f(x1,…,αix′i,…,αjxj,…,xd)f(x1,…,αixi,…,αjx′j,…,xd) | (3.1) |
for all (x1,…,xd,x′i,x′j)∈¯Rd+2 such that xi≤x′i and xj≤x′j, and for any choice of the pair (i,j).
Note that if a random pair (X1,X2) is TP2 then it is MTPP2(1,1).
The dependence described by MTPP2(α) is positive for any α: the fact that a d-dimensional random vector X is MTPP2(α) indicates that large values of the random variables Xj, with j∈J, correspond to small values of the variables Xj, with j∈I∖J, where I={1,…,d} and J={i∈I:αi>0}. In addition, small values of the variables Xj, with j∈J, are associated with large values of Xj, with j∈I∖J, as the following result shows.
Proposition 1. A d-dimensional random vector X is MTPP2(α) if, and only if, it is MTPP2(−α).
Proof. Assume X is MTPP2(α), then (3.1) holds. By making the changes xi=−y′i, xj=−y′j, x′i=−yi and x′j=−yj —note that yi≤y′i and yj≤y′j— we easily obtain that X is also MTPP2(−α).
The converse follows the same steps.
The proof of the next property concerning the MTPP2(α) concept —in which 1 denotes the vector (1,1,…,1)∈Rd— is simple, and we omit it.
Proposition 2. A d-dimensional random vector X is MTPP2(α) if, and only if, αX is MTPP2(1).
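To make Definition 1 and Proposition 2 concrete, here is a small added numerical sketch (the correlation value, grid and tolerance are arbitrary). If (X,Y) is bivariate normal with positive correlation, hence TP2, then the pair (X,−Y) should satisfy inequality (3.1) for d=2 with α=(1,−1) but not with α=(1,1), which is exactly what Proposition 2 predicts.

```python
import numpy as np

def bvn_pdf(x, y, rho=0.6):
    """Standard bivariate normal density with correlation rho (TP2 for rho > 0)."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    return norm * np.exp(-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2))

# Density of the pair (X, -Y) when (X, Y) has density bvn_pdf.
g = lambda x, y: bvn_pdf(x, -y)

def mtpp2_alpha_holds(f, alpha, grid, tol=1e-12):
    """Check inequality (3.1) for d = 2 over a grid of arguments."""
    a1, a2 = alpha
    for x, xp in [(x, xp) for x in grid for xp in grid if x <= xp]:
        for y, yp in [(y, yp) for y in grid for yp in grid if y <= yp]:
            lhs = f(a1 * x, a2 * y) * f(a1 * xp, a2 * yp)
            rhs = f(a1 * xp, a2 * y) * f(a1 * x, a2 * yp)
            if lhs < rhs - tol:
                return False
    return True

grid = np.linspace(-2.0, 2.0, 7)
print(mtpp2_alpha_holds(g, (1, -1), grid))   # True: (X, -Y) is MTPP2(1, -1) on the grid
print(mtpp2_alpha_holds(g, (1, 1), grid))    # False: it is not MTPP2(1, 1)
```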
In the following definition we provide a generalization of the MTP2-concept, similar to the generalization of orthant dependence given in [17], where αz denotes the d-dimensional vector (α1z1,…,αdzd).
Definition 2. Let X be a d-dimensional random vector with joint density function f, and let α∈Rd with |αi|=1 for all i=1,…,d. Then X is said to be multivariate totally positive of order two according to the direction α —denoted by MTP2(α)— if
f(α(x∨y))f(α(x∧y))≥f(αx)f(αy) |
for all x=(x1,…,xd),y=(y1,…,yd)∈¯Rd.
So far in this section we have defined two multivariate generalizations of the TP2-concept. The next result shows that the two concepts, MTPP2(α) and MTP2(α), are equivalent.
Theorem 3. A d-dimensional random vector X is MTP2(α) if, and only if, it is MTPP2(α).
Proof. Consider the vectors
x=(x1,…,x′i,…,xj,…,xd) and y=(x1,…,xi,…,x′j,…,xd),
with xi≤x′i and xj≤x′j. Then we have
α(x∧y)=(α1x1,…,αixi,…,αjxj,…,αdxd) and α(x∨y)=(α1x1,…,αix′i,…,αjx′j,…,αdxd).
If X is MTP2(α), then we immediately obtain that X is MTPP2(α).
Conversely, for x,y∈¯Rd suppose, without loss of generality, xl≥yl for l=1,…,r and xl≤yl for l=r+1,…,d. Let s=d−r. For each i, with 0≤i≤r, and for each j, with 0≤j≤s, we define the vectors
zi,j:=(x1∨y1,…,xi∨yi,xi+1∧yi+1,…,xr∧yr,xr+1∨yr+1,…,xr+j∨yr+j,xr+j+1∧yr+j+1,…,xd∧yd) |
(note zr,0=x, z0,s=y, z0,0=x∧y and zr,s=x∨y). Then we have zi+1,j∧zi,j+1=zi,j and zi+1,j∨zi,j+1=zi+1,j+1. Since X is MTPP2(α), if h is the joint density function of X, we obtain
h(αzi,j)h(αzi+1,j+1)≥h(αzi+1,j)h(αzi,j+1) |
for any (i,j) such that 0≤i≤r−1 and 0≤j≤s−1; therefore,
$$\prod_{i=0}^{r-1}\prod_{j=0}^{s-1}h(\alpha z_{i,j})\,h(\alpha z_{i+1,j+1})\ \ge\ \prod_{i=0}^{r-1}\prod_{j=0}^{s-1}h(\alpha z_{i+1,j})\,h(\alpha z_{i,j+1}),$$
whence
h(αz0,0)h(αzr,s)≥h(αzr,0)h(αz0,s), |
i.e., X is MTP2(α), which completes the proof.
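The bookkeeping behind the vectors z_{i,j} is easy to misread, so the following added Python sketch builds them for one randomly generated pair x, y with the ordering pattern assumed in the proof (the dimensions and the random seed are arbitrary) and verifies the boundary and lattice identities on which the telescoping argument relies.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 5, 3                  # x_l >= y_l for the first r coordinates, x_l <= y_l for the rest
s = d - r

# Build a pair (x, y) with the ordering pattern assumed in the proof of Theorem 3.
y = rng.normal(size=d)
x = y + np.concatenate([rng.uniform(0, 1, r), -rng.uniform(0, 1, s)])

def z(i, j):
    """The intermediate vector z_{i,j} used in the proof."""
    out = np.minimum(x, y).copy()
    out[:i] = np.maximum(x, y)[:i]              # first i coordinates of the first block
    out[r:r + j] = np.maximum(x, y)[r:r + j]    # first j coordinates of the second block
    return out

# Boundary identities: z_{r,0} = x, z_{0,s} = y, z_{0,0} = x ^ y, z_{r,s} = x v y.
assert np.allclose(z(r, 0), x) and np.allclose(z(0, s), y)
assert np.allclose(z(0, 0), np.minimum(x, y)) and np.allclose(z(r, s), np.maximum(x, y))

# Lattice identities used to telescope the pairwise inequalities.
for i in range(r):
    for j in range(s):
        assert np.allclose(np.minimum(z(i + 1, j), z(i, j + 1)), z(i, j))
        assert np.allclose(np.maximum(z(i + 1, j), z(i, j + 1)), z(i + 1, j + 1))
print("boundary and lattice identities verified")
```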
Another characterization of the MTP2(α)-concept is given in the following result.
Proposition 4. A d-dimensional random vector X with joint density function h is MTP2(α) if, and only if, for any pair of vectors x,x′∈¯Rd such that xi≤x′i for all i=1,…,d, and any 1≤j≤d−1, we have
h(αx)h(αx′)≥h(αx′j)h(αxj), | (3.2) |
where x′j=(x′1,…,x′j,xj+1,…,xd) and xj=(x1,…,xj,x′j+1,…,x′d).
Proof. Assume X is MTP2(α). Let x,x′∈¯Rd such that xi≤x′i for all i=1,…,d. For any 1≤j≤d−1 consider the vectors x′j and xj. Then, it is clear x′j∧xj=x and x′j∨xj=x′, whence we obtain (3.2).
Conversely, given i,j∈I={1,…,d}, let k=max{i,j}. Then, for any x,x′∈¯Rd such that xl≤x′l for all l=1,…,d, from (3.2) we have
h(αx)h(αx′)≥h(αx′k−1)h(αxk−1). |
Taking xl=x′l for all l∈I∖{i,j} we easily obtain that X is MTPP2(α), and therefore it is MTP2(α), completing the proof.
For the next result, in which we provide another characterization of the MTPP2(α)-concept of a random vector, we need some additional notation: Given a d-dimensional random vector X with joint distribution function H, let ¯H(x1,…,xd) denote the probability that X is greater than x —also known as the joint survival function of H—, i.e.,
$$\overline{H}(x_1,\ldots,x_d)=\mathbb{P}\left[\bigcap_{i=1}^{d}(X_i>x_i)\right].$$
Proposition 5. A d-dimensional random vector X is MTPP2(α) if, and only if, ¯H is MTPP2(1).
Proof. Assume X is MTPP2(α). Since
$$\overline{H}(x_1,\ldots,x_d)=\mathbb{P}\left[\bigcap_{i=1}^{d}(\alpha_iX_i>x_i)\right],$$
we have the following chain of equalities:
$$\begin{aligned}
&\mathbb{P}\left[\bigcap_{i=1}^{d}(\alpha_iX_i>x_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{d}(\alpha_iX_i>x'_i)\right]-\mathbb{P}\left[\bigcap_{i=1}^{j}(\alpha_iX_i>x_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x'_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{j}(\alpha_iX_i>x'_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x_i)\right]\\
&=\mathbb{P}\left[\bigcap_{i=1}^{j}(x_i<\alpha_iX_i\le x'_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{d}(\alpha_iX_i>x'_i)\right]\\
&\quad-\mathbb{P}\left[\bigcap_{i=1}^{j}(x_i<\alpha_iX_i\le x'_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{j}(\alpha_iX_i>x'_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x_i)\right]\\
&=\mathbb{P}\left[\bigcap_{i=1}^{d}(x_i<\alpha_iX_i\le x'_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{d}(\alpha_iX_i>x'_i)\right]\\
&\quad-\mathbb{P}\left[\bigcap_{i=1}^{j}(x_i<\alpha_iX_i\le x'_i),\bigcap_{i=j+1}^{d}(\alpha_iX_i>x_i)\right]\mathbb{P}\left[\bigcap_{i=1}^{j}(\alpha_iX_i>x'_i),\bigcap_{i=j+1}^{d}(x_i<\alpha_iX_i\le x'_i)\right].
\end{aligned}\tag{3.3}$$
Now, we prove that the last expression in (3.3) is non-negative. For that, note that, from Proposition 2, we have that the random vector αX is MTPP2(1), and from Proposition 4 we have
g(y)g(y′)−g(y′j)g(yj)≥0, | (3.4) |
where g is the joint density function of αX. Integrating both sides of (3.4), with xi<yi≤x′i<y′i for all i=1,…,d, we have
$$\int_{x_1}^{x'_1}\cdots\int_{x_d}^{x'_d}\int_{x'_1}^{\infty}\cdots\int_{x'_d}^{\infty}\big(g(y)g(y')-g(y'_j)g(y_j)\big)\,dy\,dy'\ \ge\ 0,$$
so that the expression in (3.3) is non-negative.
The converse follows the same steps, and the proof is completed.
Next we show that any subset of a MTPP2(α) random vector preserves this property.
Proposition 6. If X=(X1,…,Xd) is a MTPP2(α) random vector, then any subset (Xi1,…,Xik) of X is MTPP2(α∗), where α∗=(αi1,…,αik).
Proof. Let d≥3 be a natural number, i∈{1,…,d}, α(i)=(α1,…,αi−1,αi+1,…,αd) and X(i) the (d−1)-dimensional random vector (X1,…,Xi−1,Xi+1,…,Xd). Let g(x1,…,xd) (respectively, g(i)(x1,…,xi−1,xi+1,…,xd)) be the joint density function of the random vector αX (respectively, α(i)X(i)). We now prove that the random vector α(i)X(i) is MTPP2(1d−1), where 1d−1 denotes the (d−1)-dimensional vector (1,1,…,1); from Proposition 2, it then follows that X(i) is MTPP2(α(i)). Iterating this reasoning over the required number of components proves the result.
Let j,k∈{1,…,d} such that j<i<k. For the sake of simplicity, let us denote
g(i)(xj,xk,x(j,k)):=g(i)(x1,…,xj,…,xi−1,xi+1,…,xk,…,xd) |
and
g(xj,xi,xk,x(j,i,k)):=g(x1,…,xj,…,xi,…,xk,…,xd). |
Therefore, we need to prove
g(i)(xj,xk,x(j,k))g(i)(x′j,x′k,x(j,k))≥g(i)(x′j,xk,x(j,k))g(i)(xj,x′k,x(j,k)) |
for any xj,xk,x′j,x′k∈¯R such that xj≤x′j and xk≤x′k and any x(j,k)∈¯Rd−3.
We have
$$\begin{aligned}
&g^{(i)}(x_j,x_k,x_{(j,k)})\,g^{(i)}(x'_j,x'_k,x_{(j,k)})-g^{(i)}(x'_j,x_k,x_{(j,k)})\,g^{(i)}(x_j,x'_k,x_{(j,k)})\\
&=\iint\frac{g(x'_j,x'_i,x'_k,x_{(j,i,k)})}{g(x'_j,x'_i,x_k,x_{(j,i,k)})}\,g(x_j,x_i,x_k,x_{(j,i,k)})\,g(x'_j,x'_i,x_k,x_{(j,i,k)})\,dx_i\,dx'_i-\iint\frac{g(x_j,x'_i,x'_k,x_{(j,i,k)})}{g(x_j,x'_i,x_k,x_{(j,i,k)})}\,g(x'_j,x_i,x_k,x_{(j,i,k)})\,g(x_j,x'_i,x_k,x_{(j,i,k)})\,dx_i\,dx'_i\\
&=\iint_{x_i<x'_i}\left(\frac{g(x'_j,x'_i,x'_k,x_{(j,i,k)})}{g(x'_j,x'_i,x_k,x_{(j,i,k)})}-\frac{g(x_j,x_i,x'_k,x_{(j,i,k)})}{g(x_j,x_i,x_k,x_{(j,i,k)})}\right)\cdot\Big(g(x_j,x_i,x_k,x_{(j,i,k)})\,g(x'_j,x'_i,x_k,x_{(j,i,k)})-g(x'_j,x_i,x_k,x_{(j,i,k)})\,g(x_j,x'_i,x_k,x_{(j,i,k)})\Big)\,dx_i\,dx'_i\\
&\quad+\iint_{x_i<x'_i}\left[\frac{g(x'_j,x'_i,x'_k,x_{(j,i,k)})}{g(x'_j,x'_i,x_k,x_{(j,i,k)})}-\frac{g(x_j,x'_i,x'_k,x_{(j,i,k)})}{g(x_j,x'_i,x_k,x_{(j,i,k)})}+\frac{g(x'_j,x_i,x'_k,x_{(j,i,k)})}{g(x'_j,x_i,x_k,x_{(j,i,k)})}-\frac{g(x_j,x_i,x'_k,x_{(j,i,k)})}{g(x_j,x_i,x_k,x_{(j,i,k)})}\right]\cdot g(x'_j,x_i,x_k,x_{(j,i,k)})\,g(x_j,x'_i,x_k,x_{(j,i,k)})\,dx_i\,dx'_i\ \ge\ 0
\end{aligned}$$
since αX is MTPP2(1) —recall Proposition 2— and
g(xj,xi,xk,x(j,i,k))g(x′j,x′i,xk,x(j,i,k))−g(x′j,xi,xk,x(j,i,k))g(xj,x′i,xk,x(j,i,k))≥0, |
$$\frac{g(x'_j,x'_i,x'_k,x_{(j,i,k)})}{g(x'_j,x'_i,x_k,x_{(j,i,k)})}\ \ge\ \frac{g(x_j,x'_i,x'_k,x_{(j,i,k)})}{g(x_j,x'_i,x_k,x_{(j,i,k)})}\ \ge\ \frac{g(x_j,x_i,x'_k,x_{(j,i,k)})}{g(x_j,x_i,x_k,x_{(j,i,k)})},$$
$$\frac{g(x'_j,x'_i,x'_k,x_{(j,i,k)})}{g(x'_j,x'_i,x_k,x_{(j,i,k)})}\ \ge\ \frac{g(x_j,x'_i,x'_k,x_{(j,i,k)})}{g(x_j,x'_i,x_k,x_{(j,i,k)})}\quad\text{and}\quad\frac{g(x'_j,x_i,x'_k,x_{(j,i,k)})}{g(x'_j,x_i,x_k,x_{(j,i,k)})}\ \ge\ \frac{g(x_j,x_i,x'_k,x_{(j,i,k)})}{g(x_j,x_i,x_k,x_{(j,i,k)})};$$
whence the result follows.
For the next result, we will use some additional notation. For α=(α1,…,αd1)∈¯Rd1 and β=(β1,…,βd2)∈¯Rd2, (α,β) will denote concatenation, i.e., (α,β)=(α1,…,αd1,β1,…,βd2)∈¯Rd1+d2; and similarly for random vectors.
Proposition 7. If the respective d1- and d2-dimensional random vectors X and Y are MTPP2(α) and MTPP2(β), and X and Y are independent, then the (d1+d2)-dimensional random vector (X,Y) is MTPP2(α,β).
Proof. Since the random vectors X and Y are independent, so are the variables αX and βY. Denoting by f(x), g(y) and h(x,y) the respective joint density functions of αX, βY and (αX,βY), we have h(x,y)=f(x)g(y); whence,
h(x,y)h(x1,…,x′i,…,xd1,y1,…,y′j,…,yd2)−h(x1,…,x′i,…,xd1,y1,…,yj,…,yd2)h(x1,…,xi,…,xd1,y1,…,y′j,…,yd2)=0 |
for any choice (i,j), with 1≤i≤d1 and 1≤j≤d2, such that xi≤x′i and yj≤y′j. If the two indices chosen are from the first d1 indices (we have a similar reasoning for the last d2 indices), since X is MTPP2(α), or equivalently, αX is MTPP2(1) —recall Proposition 2—, we have
$$\begin{aligned}
&h(x,y)\,h(x_1,\ldots,x'_i,\ldots,x'_j,\ldots,x_{d_1},y_1,\ldots,y_{d_2})-h(x_1,\ldots,x'_i,\ldots,x_j,\ldots,x_{d_1},y_1,\ldots,y_{d_2})\,h(x_1,\ldots,x_i,\ldots,x'_j,\ldots,x_{d_1},y_1,\ldots,y_{d_2})\\
&=g^2(y)\Big[f(x)\,f(x_1,\ldots,x'_i,\ldots,x'_j,\ldots,x_{d_1})-f(x_1,\ldots,x'_i,\ldots,x_j,\ldots,x_{d_1})\,f(x_1,\ldots,x_i,\ldots,x'_j,\ldots,x_{d_1})\Big]\ \ge\ 0;
\end{aligned}$$
therefore, (X,Y) is MTPP2(α,β), which completes the proof.
We conclude this section with three additional properties of the MTPP2(α)-concept. The first property is straightforward and we omit its proof.
Proposition 8. Every d-dimensional random vector X with independent components is MTPP2(α) for any α∈Rd with |αi|=1 for all i=1,…,d.
Proposition 9. If the random vector X=(X1,…,Xd) is MTPP2(α) and f1,…,fd are d real-valued and non-decreasing functions, then the random vector (f1(X1),…,fd(Xd)) is MTPP2(α).
Proof. Let f (respectively, g) be the joint density function of αX (respectively, (α1f1(X1),…,αdfd(Xd))). Then it holds
$$g(x_1,\ldots,x_d)=f\big(\alpha_1f_1^{-1}(\alpha_1x_1),\ldots,\alpha_df_d^{-1}(\alpha_dx_d)\big).$$
Since X is MTPP2(α), then αX is MTPP2(1) —recall Proposition 2—, and since αkf−1k(αkxk)≤αkf−1k(αkx′k) for k=i,j with xk≤x′k, then (α1f1(X1),…,αdfd(Xd)) is MTPP2(1), and thus (f1(X1),…,fd(Xd)) is MTPP2(α).
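As an added numerical illustration of Proposition 9 (not taken from the paper), one can push a TP2 bivariate normal pair through the non-decreasing map exp and check, using the change-of-variables density of the resulting lognormal pair, that the TP2 inequality still holds on a grid; the correlation, grid and tolerance are arbitrary choices.

```python
import numpy as np

def bvn_pdf(x, y, rho=0.6):
    """Standard bivariate normal density with correlation rho (TP2 for rho > 0)."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    return norm * np.exp(-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2))

def lognormal_pair_pdf(u, v):
    """Density of (exp(X), exp(Y)) for (X, Y) ~ bvn_pdf, by change of variables."""
    return bvn_pdf(np.log(u), np.log(v)) / (u * v)

def tp2_on_grid(f, grid, tol=1e-15):
    return all(
        f(xp, yp) * f(x, y) >= f(xp, y) * f(x, yp) - tol
        for x in grid for xp in grid if x <= xp
        for y in grid for yp in grid if y <= yp
    )

grid = np.linspace(0.2, 3.0, 8)
print(tp2_on_grid(lognormal_pair_pdf, grid))   # expected: True
```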
Proposition 10. Let X1,…,Xd,Y be d+1 random variables such that X1,…,Xd are independent given Y. If the random pair (Xi,Y) is MTPP2(αi,1), with |αi|=1 for all i=1,…,d, then the random vector (X1,…,Xd) is MTPP2(α1,…,αd).
Proof. Let fi(xi,y) be the joint density function of the random pair (Xi,Y) for i=1,…,d, and let g(y) be the density function of Y. Then the joint density function of the random vector (X1,…,Xd), which we denote by f, is given by
$$f(x_1,\ldots,x_d)=\int\prod_{i=1}^{d}f_i(x_i,y)\,g(y)\,dy.$$
Given i,j∈{1,…,d} and x1,…,xd,x′i,x′j∈¯R with xi≤x′i and xj≤x′j, we have
$$\begin{aligned}
&f(x_1,\ldots,\alpha_ix_i,\ldots,\alpha_jx_j,\ldots,x_d)\,f(x_1,\ldots,\alpha_ix'_i,\ldots,\alpha_jx'_j,\ldots,x_d)-f(x_1,\ldots,\alpha_ix'_i,\ldots,\alpha_jx_j,\ldots,x_d)\,f(x_1,\ldots,\alpha_ix_i,\ldots,\alpha_jx'_j,\ldots,x_d)\\
&=\iint\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix_i,y)f_i(\alpha_ix'_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&\quad-\iint\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix'_i,y)f_i(\alpha_ix_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&=\iint_{y<y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix_i,y)f_i(\alpha_ix'_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&\quad+\iint_{y>y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix_i,y)f_i(\alpha_ix'_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&\quad-\iint_{y<y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix'_i,y)f_i(\alpha_ix_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&\quad-\iint_{y>y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\,f_i(\alpha_ix'_i,y)f_i(\alpha_ix_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')\,g(y)g(y')\,dy\,dy'\\
&=\iint_{y<y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\Big(f_i(\alpha_ix_i,y)f_i(\alpha_ix'_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')+f_i(\alpha_ix_i,y')f_i(\alpha_ix'_i,y)f_j(\alpha_jx_j,y')f_j(\alpha_jx'_j,y)\\
&\qquad\qquad-f_i(\alpha_ix'_i,y)f_i(\alpha_ix_i,y')f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')-f_i(\alpha_ix'_i,y')f_i(\alpha_ix_i,y)f_j(\alpha_jx_j,y')f_j(\alpha_jx'_j,y)\Big)\,g(y)g(y')\,dy\,dy'\\
&=\iint_{y<y'}\prod_{\substack{k=1\\ k\ne i,j}}^{d}\big(f_k(x_k,y)f_k(x_k,y')\big)\big(f_i(\alpha_ix_i,y)f_i(\alpha_ix'_i,y')-f_i(\alpha_ix'_i,y)f_i(\alpha_ix_i,y')\big)\\
&\qquad\qquad\cdot\big(f_j(\alpha_jx_j,y)f_j(\alpha_jx'_j,y')-f_j(\alpha_jx'_j,y)f_j(\alpha_jx_j,y')\big)\,g(y)g(y')\,dy\,dy'\ \ge\ 0
\end{aligned}$$
since (Xl,Y) is MTPP2(αl,1) for l=i,j; therefore, X is MTPP2(α), which completes the proof.
In this section, we establish certain types of dependence —both for some of those previously defined and some known ones that we recall in the section— for order statistics. We begin by recalling some concepts related to order statistics. We refer to [12,18] and the references therein for an overview.
Let X1,X2,…,Xd be independent and identically distributed random variables, with density function f and distribution function F, and let X(i) denote the i-th order statistic, so that X(1)=min{X1,X2,…,Xd} and X(d)=max{X1,X2,…,Xd}. Observe that, for 1≤i<j≤d, the joint density function of the random pair (X(i),X(j)) is given by
$$h_{i,j}(x_i,x_j)=\begin{cases}m_{i,j}\,[F(x_i)]^{i-1}\,[F(x_j)-F(x_i)]^{j-i-1}\,[1-F(x_j)]^{d-j}\,f(x_i)f(x_j), & \text{if } x_i\le x_j,\\ 0, & \text{otherwise},\end{cases}\tag{4.1}$$
where
$$m_{i,j}=\frac{d!}{(i-1)!\,(j-i-1)!\,(d-j)!},$$
and the joint density function of the random vector (X(1),X(2),…,X(i)), with 2≤i≤d, is given by
$$h_{1,2,\ldots,i}(x_1,x_2,\ldots,x_i)=\begin{cases}\dfrac{d!}{(d-i)!}\,[1-F(x_i)]^{d-i}\displaystyle\prod_{k=1}^{i}f(x_k), & \text{if } x_1\le x_2\le\cdots\le x_i,\\ 0, & \text{otherwise}.\end{cases}\tag{4.2}$$
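Formula (4.1) can be sanity-checked numerically. The following added sketch (the Uniform(0,1) model, the indices and the thresholds are arbitrary choices) compares the probability P[X(2)≤0.4, X(4)≤0.7] for a sample of size d=5 obtained by integrating (4.1) with a plain Monte Carlo estimate from sorted uniform samples.

```python
import numpy as np
from math import factorial
from scipy.integrate import dblquad

d, i, j = 5, 2, 4                       # sample size and the two order statistics
F = lambda x: np.clip(x, 0.0, 1.0)      # Uniform(0,1) distribution function
f = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0

m_ij = factorial(d) / (factorial(i - 1) * factorial(j - i - 1) * factorial(d - j))

def h_ij(xi, xj):
    """Joint density (4.1) of (X_(i), X_(j)) for a Uniform(0,1) sample."""
    if xi > xj:
        return 0.0
    return (m_ij * F(xi) ** (i - 1) * (F(xj) - F(xi)) ** (j - i - 1)
            * (1 - F(xj)) ** (d - j) * f(xi) * f(xj))

a, b = 0.4, 0.7
# P[X_(i) <= a, X_(j) <= b] by integrating (4.1) over {x_i <= a, x_i <= x_j <= b}.
prob_integral, _ = dblquad(lambda xj, xi: h_ij(xi, xj), 0.0, a, lambda xi: xi, lambda xi: b)

# The same probability estimated by Monte Carlo.
rng = np.random.default_rng(2)
samples = np.sort(rng.uniform(size=(200_000, d)), axis=1)
prob_mc = np.mean((samples[:, i - 1] <= a) & (samples[:, j - 1] <= b))

print(round(prob_integral, 3), round(prob_mc, 3))   # the two values should agree closely
```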
Now we recall several known dependence concepts: see [19,20,21] for more details and applications.
Definition 3. Let X be a random variable with distribution function F. Then F is said to be decreasing failure rate —denoted by DFR— if ℙ[X>x+y|X>x] is a nondecreasing function of x for all y≥0.
We note that, in Definition 3, denoting $\overline{F}(x)=1-F(x)$, F is decreasing failure rate if $\overline{F}(x+y)/\overline{F}(x)$ is non-decreasing in x for all y≥0.
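A standard example (added here for illustration) is the Weibull distribution with shape parameter k<1, whose failure rate k x^{k−1} is decreasing, so the distribution is DFR; the short sketch below checks numerically that $\overline{F}(x+y)/\overline{F}(x)$ is non-decreasing in x on a grid for a few values of y.

```python
import numpy as np

k = 0.5                                      # Weibull shape parameter < 1 (DFR case)
sf = lambda x: np.exp(-x ** k)               # survival function 1 - F(x), x > 0

x_grid = np.linspace(0.1, 10.0, 50)
for y in (0.5, 1.0, 2.0):
    ratio = sf(x_grid + y) / sf(x_grid)      # P[X > x + y | X > x]
    assert np.all(np.diff(ratio) >= -1e-12)  # non-decreasing in x on the grid
print("the Weibull(k = 0.5) survival ratio is non-decreasing on the grid")
```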
Definition 4. ([19]) Let X and Y be random variables. Y is positive regression dependent in X —denoted by PRD(Y|X)— if ℙ[Y>y|X=x] is a nondecreasing function of x for all y≥0.
A generalization of the PRD-concept for d random variables is the following.
Definition 5. The random variables X1,X2,…,Xd are conditionally increasing in sequence —denoted by CIS— if ℙ[Xi>xi|X1=x1,…,Xi−1=xi−1] is nondecreasing in x1,…,xi−1, for every 2≤i≤d, and for all xi∈¯R.
In what follows, we provide some results on dependence concepts described in this section and the previous one for order statistics.
Proposition 11. Let d be a natural number such that d≥2. For 1≤i≤d, let X(i) be the i-th order statistic of a statistical sample of size d from a DFR distribution function F, with density function f. Then, for every (i,j) such that 1≤i<j≤d, we have PRD(X(j)−X(i)|X(i)).
Proof. We have to prove that ℙ[X(j)−X(i)>y|X(i)=x] is nondecreasing in x for all y≥0. For that, note
$$\begin{aligned}
\mathbb{P}\left[X_{(j)}-X_{(i)}>y\mid X_{(i)}=x\right]&=\int_{x+y}^{+\infty}\frac{(d-i)!}{(j-i-1)!\,(d-j)!}\,\frac{[F(z)-F(x)]^{j-i-1}\,[1-F(z)]^{d-j}}{[1-F(x)]^{d-i}}\,f(z)\,dz\\
&=\frac{(d-i)!}{(j-i-1)!\,(d-j)!}\int_{x+y}^{+\infty}\left[1-\frac{\overline{F}(z)}{\overline{F}(x)}\right]^{j-i-1}\left[\frac{\overline{F}(z)}{\overline{F}(x)}\right]^{d-j}\frac{f(z)}{\overline{F}(x)}\,dz.
\end{aligned}$$
With the change of variable $u=\overline{F}(z)/\overline{F}(x)$ we get
$$\mathbb{P}\left[X_{(j)}-X_{(i)}>y\mid X_{(i)}=x\right]=\frac{(d-i)!}{(j-i-1)!\,(d-j)!}\int_{0}^{\overline{F}(x+y)/\overline{F}(x)}u^{d-j}(1-u)^{j-i-1}\,du=\mathbb{P}\left[Z\le\frac{\overline{F}(x+y)}{\overline{F}(x)}\right],$$
where Z is a random variable with Beta distribution β(d−j+1,j−i). Since F is DFR, given x,x′∈R with x<x′, we have
$$\frac{\overline{F}(x+y)}{\overline{F}(x)}\ \le\ \frac{\overline{F}(x'+y)}{\overline{F}(x')}$$
for all y≥0. Therefore, it is easy to conclude PRD(X(j)−X(i)|X(i)).
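The Beta representation obtained in the proof can be evaluated directly. The following added sketch (the Weibull DFR model and all parameter values are arbitrary choices) computes P[X(j)−X(i)>y | X(i)=x] = P[Z ≤ $\overline{F}(x+y)/\overline{F}(x)$] on a grid of x values and confirms that it is non-decreasing, in line with Proposition 11.

```python
import numpy as np
from scipy.stats import beta

d, i, j = 6, 2, 4
y = 1.0
k = 0.5                                    # Weibull shape < 1, hence a DFR distribution
sf = lambda x: np.exp(-x ** k)             # survival function 1 - F(x)

def cond_prob(x):
    """P[X_(j) - X_(i) > y | X_(i) = x] via the Beta(d-j+1, j-i) representation."""
    return beta.cdf(sf(x + y) / sf(x), d - j + 1, j - i)

xs = np.linspace(0.1, 8.0, 40)
vals = np.array([cond_prob(x) for x in xs])
assert np.all(np.diff(vals) >= -1e-12)     # non-decreasing in x, i.e. PRD on the grid
print(vals[[0, 19, 39]])                   # a few of the (increasing) values
```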
Proposition 12. For k=1,2,…,d, let X(k) be the k-th order statistic of a statistical sample of size d from a distribution function F. Then we have PLRD(X(i),X(j)) for every (i,j) such that 1≤i,j≤d.
Proof. Assume —without loss of generality— i<j. Let hi,j be the joint density function of the random vector (X(i),X(j)) given by (4.1). For any x,x′,y,y′∈¯R such that x≤x′,y≤y′, we have
$$\begin{aligned}
&h_{i,j}(x',y')\,h_{i,j}(x,y)-h_{i,j}(x',y)\,h_{i,j}(x,y')\\
&=\left(\frac{d!}{(i-1)!\,(j-i-1)!\,(d-j)!}\right)^{2}[F(x)]^{i-1}[F(x')]^{i-1}\times[1-F(y)]^{d-j}[1-F(y')]^{d-j}f(x)f(x')f(y)f(y')\\
&\quad\times\Big\{\big[(F(y)-F(x))(F(y')-F(x'))\big]^{j-i-1}-\big[(F(y')-F(x))(F(y)-F(x'))\big]^{j-i-1}\Big\},
\end{aligned}$$
whence the proof reduces to proving
(F(y)−F(x))(F(y′)−F(x′))≥(F(y′)−F(x))(F(y)−F(x′)) |
which, in turn, reduces to
(F(x′)−F(x))(F(y′)−F(y))≥0; |
but this obviously holds since x≤x′ and y≤y′, which completes the proof.
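As an added numerical check of Proposition 12 (not from the paper), the TP2 inequality for the joint density (4.1) can be verified on a grid for a concrete model; the Uniform(0,1) sample, the indices and the grid below are arbitrary choices.

```python
import numpy as np
from math import factorial

d, i, j = 5, 2, 4
F = lambda x: min(max(x, 0.0), 1.0)          # Uniform(0,1) distribution function
m_ij = factorial(d) / (factorial(i - 1) * factorial(j - i - 1) * factorial(d - j))

def h_ij(x, y):
    """Joint density (4.1) of (X_(i), X_(j)) for a Uniform(0,1) sample."""
    if x > y or not (0.0 <= x <= 1.0) or not (0.0 <= y <= 1.0):
        return 0.0
    return m_ij * F(x) ** (i - 1) * (F(y) - F(x)) ** (j - i - 1) * (1 - F(y)) ** (d - j)

grid = np.linspace(0.05, 0.95, 10)
ok = all(
    h_ij(xp, yp) * h_ij(x, y) >= h_ij(xp, y) * h_ij(x, yp) - 1e-12
    for x in grid for xp in grid if x <= xp
    for y in grid for yp in grid if y <= yp
)
print(ok)   # expected: True, consistent with PLRD(X_(2), X_(4))
```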
Proposition 13. The order statistics X(1),X(2),…,X(d) of a statistical sample of size d from a distribution function F are always conditionally increasing in sequence (CIS).
Proof. Let 2≤i≤d. Since the joint density function of (X(1),X(2),…,X(i)) is given by (4.2), we have
$$\mathbb{P}\left[X_{(i)}>x\,\middle|\,\bigcap_{j=1}^{i-1}\big(X_{(j)}=x_j\big)\right]=\int_{x}^{+\infty}\frac{h_{1,2,\ldots,i}(x_1,x_2,\ldots,x_i)}{h_{1,2,\ldots,i-1}(x_1,x_2,\ldots,x_{i-1})}\,dx_i=\begin{cases}\left[\dfrac{1-F(x)}{1-F(x_{i-1})}\right]^{d-i+1}, & \text{if } x\ge x_{i-1},\\ 1, & \text{if } x<x_{i-1}.\end{cases}$$
Therefore, for every xi−1,x′i−1∈¯R, with xi−1≤x′i−1, since 1−F(xi−1)≥1−F(x′i−1), we have
$$\left(\frac{1-F(x)}{1-F(x_{i-1})}\right)^{d-i+1}\le\left(\frac{1-F(x)}{1-F(x'_{i-1})}\right)^{d-i+1},$$
whence the result easily follows.
Proposition 14. For 1≤i≤d, let X(i) be the i-th order statistic of a statistical sample of size d from a DFR distribution F. Let D1=X(1) and Di=X(i)−X(i−1) for i=2,…,d. Then the random variables D1,D2,…,Dd are CIS.
Proof. Given 2≤i≤d, let gi denote the joint density function of (D1,D2,…,Di). Then we have
gi(x1,x2,…,xi)=h1,2,…,i(y1,y2,…,yi),
where h1,2,…,i is the density of (X(1),X(2),…,X(i)) given by (4.2), y1=x1 and $y_j=\sum_{k=1}^{j}x_k$ for j=2,…,i. Therefore, for x≥0 we get
$$\mathbb{P}\left[D_i>x\,\middle|\,\bigcap_{j=1}^{i-1}(D_j=x_j)\right]=\int_{x}^{+\infty}\frac{h_{1,2,\ldots,i}\big(x_1,x_1+x_2,\ldots,\sum_{j=1}^{i}x_j\big)}{h_{1,2,\ldots,i-1}\big(x_1,x_1+x_2,\ldots,\sum_{j=1}^{i-1}x_j\big)}\,dx_i=\left[\frac{1-F\big(\sum_{j=1}^{i-1}x_j+x\big)}{1-F\big(\sum_{j=1}^{i-1}x_j\big)}\right]^{d-i+1},$$
and $\mathbb{P}\big[D_i>x\mid\bigcap_{j=1}^{i-1}(D_j=x_j)\big]=1$ when x<0.
Since F is DFR, given x1,x2,…,xi−1,x′1,x′2,…,x′i−1∈¯R such that xj≤x′j for all j=1,2,…,i−1, we have $\sum_{j=1}^{i-1}x_j\le\sum_{j=1}^{i-1}x'_j$ and thus
$$\frac{\overline{F}\big(\sum_{j=1}^{i-1}x_j+x\big)}{\overline{F}\big(\sum_{j=1}^{i-1}x_j\big)}\ \le\ \frac{\overline{F}\big(\sum_{j=1}^{i-1}x'_j+x\big)}{\overline{F}\big(\sum_{j=1}^{i-1}x'_j\big)},$$
and hence
$$\left(\frac{1-F\big(\sum_{j=1}^{i-1}x_j+x\big)}{1-F\big(\sum_{j=1}^{i-1}x_j\big)}\right)^{d-i+1}\le\left(\frac{1-F\big(\sum_{j=1}^{i-1}x'_j+x\big)}{1-F\big(\sum_{j=1}^{i-1}x'_j\big)}\right)^{d-i+1},$$
whence the result follows.
We have defined two multivariate generalizations of the TP2-concept: Total positivity according to a direction and in pairs. The equivalence of these two multivariate dependence concepts and several key properties have been established. Moreover, specific dependencies among the order statistics of a sample from a distribution function have been identified. Future work will explore additional dependence concepts in multivariate settings.
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors acknowledge the comments of the reviewers, the support of the program ERDF-Andalucía 2014-2020 (University of Almería) under research project UAL2020-AGR-B1783 and project PID2021-122657OB-I00 by the Ministry of Science and Innovation (Spain). The first author is also partially supported by the CDTIME of the University of Almería.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
[1] | K. Jogdeo, Concepts of dependence, in Encyclopedia of Statistical Sciences, vol. 1, (eds. S. Kotz and N.L. Johnson) New York: Wiley, 1982,324–334. |
[2] | P. Čížek, R. Weron, W. Härdle, Statistical Tools for Finance and Insurance, Berlin: Springer, 2005. http://dx.doi.org/10.1007/978-3-642-18062-0 |
[3] | R. E. Barlow, F. Proschan, Statistical Theory of Reliability and Life Testing: Probability Models, Silver Spring, MD: To Begin With, 1981. |
[4] | H. Joe, Multivariate Models and Dependence Concepts, London: Chapman & Hall, 1997. http://dx.doi.org/10.1201/9780367803896 |
[5] | S. Karlin, Total Positivity, Stanford, CA: Stanford University Press, 1968. |
[6] | M. L. T. Lee, Dependence by total positivity, Ann. Probab., 13 (1985), 572–582. http://dx.doi.org/10.1214/aop/1176993010 |
[7] | E. L. Lehmann, Some concepts of dependence, Ann. Math. Statist., 37 (1966), 1137–1153. |
[8] | M. Shaked, A family of concepts of dependence for bivariate distributions, J. Amer. Statist. Assoc., 72 (1977), 642–650. http://dx.doi.org/10.1080/01621459.1977.10480628 |
[9] | Y. L. Tong, Probability Inequalities in Multivariate Distributions, New York: Academic Press, 1980. |
[10] | S. Karlin, Y. Rinott, M-Matrices as covariance matrices of multinormal distributions, Linear Algebra Appl., 52 (1983), 419–438. http://dx.doi.org/10.1016/0024-3795(83)80027-5 |
[11] | M. Slawski, M. Hein, Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov random field, Linear Algebra Appl., 473 (2015), 145–179. http://dx.doi.org/10.1016/j.laa.2014.04.020 |
[12] | H. A. David, H. N. Nagaraja, Order Statistics, 3rd ed., Hoboken, NJ: John Wiley & Sons, Inc., 2003. http://dx.doi.org/10.1002/0471722162 |
[13] | S. Karlin, Y. Rinott, Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions, J. Multivariate Anal., 10 (1980), 467–498. http://dx.doi.org/10.1016/0047-259X(80)90065-2 |
[14] | A. M. Marshall, I. Olkin, Multivariate distributions generated from mixtures of convolution and product families, in Topics in Statistical Dependence, (eds. H.W. Block, A.R. Sampson and T.H. Savits) Institute of Mathematical Statistics, Lecture Notes - Monograph Series, vol. 16, 1990,371–393. |
[15] | S. Karlin, Y. Rinott, Total positivity properties of absolute value multinormal variables with applications to confidence interval estimates and related probabilistic inequalities, Ann. Stat., 9 (1981), 1035–1049. |
[16] | J. G. Propp, D. B. Wilson, Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Struct. Algor., 9 (1996), 223–252. http://dx.doi.org/10.1002/(SICI)1098-2418(199608/09)9:1/2<223::AID-RSA14>3.0.CO;2-O |
[17] | J. J. Quesada-Molina, M. Úbeda-Flores, Directional dependence of random vectors, Inf. Sci., 215 (2012), 67–74. http://dx.doi.org/10.1016/j.ins.2012.05.019 |
[18] | B. C. Arnold, N. Balakrishnan, H. N. Nagaraja, A First Course in Order Statistics, Philadelphia: SIAM, 2008. http://dx.doi.org/10.1137/1.9780898719062.fm |
[19] | A. Colangelo, A. Müller, M. Scarsini, Positive dependence and weak convergence, J. Appl. Prob., 43 (2006), 48–59. http://dx.doi.org/10.1239/jap/1143936242 |
[20] | G. Nappo, F. Spizzichino, Relations between ageing and dependence for exchangeable lifetimes with an extension for the IFRA/DFRA property, Depend. Model., 8 (2020), 1–33. http://dx.doi.org/10.1515/demo-2020-0001 |
[21] | Z. Wei, T. Wang, W. Panichkitkosolkul, Dependence and association concepts through copulas, in Modeling Dependence in Econometrics, (eds. V. N. Huynh, V. Kreinovich and S. Sriboonchitta) Advances in Intelligent Systems and Computing, vol. 251, 2014,113–126. http://dx.doi.org/10.1007/978-3-319-03395-2_7 |