Research article

Analysis of the generalized progressive hybrid censoring from Burr Type-Ⅻ lifetime model

  • Received: 12 April 2021 Accepted: 22 June 2021 Published: 28 June 2021
  • MSC : 62G30, 62F15

  • In this paper, we use a generalized progressive hybrid censored sample from the Burr Type-Ⅻ distribution to estimate the unknown parameters, the reliability and the hazard functions. We apply the maximum likelihood (ML) and the Bayesian estimation under different prior distributions and different loss functions, namely the squared error, LINEX and general entropy loss functions. Also, we construct the classical and credible intervals for the unknown parameters as well as for the survival and hazard functions. In addition, we investigate the performance of the point estimation using the mean square error (MSE) and expected bias (EB), and the performance of the interval estimation using the average length and coverage probability. Further, we develop the Bayesian one- and two-sample prediction for the non-observed failures in the progressive censoring. In order to show the performance and usefulness of the inferential procedures, we carry out some simulation experiments using an MCMC algorithm for the Bayesian approach based on different prior distributions. Finally, we apply the theoretical findings to a real-life data set.

    Citation: Magdy Nagy, Khalaf S. Sultan, Mahmoud H. Abu-Moussa. Analysis of the generalized progressive hybrid censoring from Burr Type-Ⅻ lifetime model[J]. AIMS Mathematics, 2021, 6(9): 9675-9704. doi: 10.3934/math.2021564




    Especially in reliability and survival analysis, the progressive Type-Ⅱ censoring scheme has been the most commonly used one, and it is preferable to the classical Type-Ⅱ censoring scheme. Progressive censoring is beneficial in several real-life areas, including industrial, life-research and clinical applications, since it permits the removal of surviving experimental units before the test finishes. Consider a life-testing experiment in which $n$ units are placed on test; under limitations of cost and time it is not desirable to observe all failure times, so only part of the failures are observed, and such a sample is called a censored sample. In a progressive censoring scheme only $m$ $(m<n)$ failure times are observed. At the occurrence of the first failure, $R_1$ of the $n-1$ surviving units are randomly selected and removed from the test; at the second observed failure, $R_2$ of the $n-R_1-2$ surviving units are randomly selected and removed, and so on. Finally, at the time of the $m$th failure the experiment stops, and the remaining surviving units $R_m=n-R_1-\cdots-R_{m-1}-m$ are removed from the test. The censoring sizes $\{R_i,\ i=1,\ldots,m-1\}$ are prefixed. We denote the $m$ ordered failure times thus observed by $X_{1:m:n},\ldots,X_{m:m:n}$. It is clear that $n=m+\sum_{k=1}^{m}R_k$. The ordered failure times obtained from this form of censoring are called progressively Type-Ⅱ censored order statistics. Various authors have studied the order statistics and other features of such progressively censored life tests. Some primary references are Balakrishnan and Aggarwala [1], Balakrishnan [2], Cramer and Iliopoulos [3], Raqab et al. [4], Mohie El-Din and Shafay [5], and Balakrishnan and Cohen [6].

    The drawback of the progressive Type-Ⅱ censoring scheme is that, if the units are highly reliable, the experiment time can be very long. The works of Kundu and Joarder [7] and Childs et al. [8] treat this problem by proposing a new type of censoring in which the stopping time of the experiment is $\min\{X_{m:m:n},T\}$, where $T\in(0,\infty)$ is fixed beforehand. This type is called the progressive hybrid censoring scheme (PHCS). The total duration of the experiment under PHCS will not exceed $T$. Several authors have studied the PHCS; see, for instance, Lin et al. [9] and [10], Hemmati and Khorram [11], and Mohie El-Din et al. [12].

    The downside of the PHCS, on the other hand, is that it cannot be applied when only very few failures may be observed before $T$. For this reason, Cho et al. [13] proposed a more general type of censoring, called the generalized PHCS, in which a minimum number of failures is pre-determined. The life-testing experiment then saves both time and the cost of failures, and the additional observed failures improve the statistical efficiency of the estimates. The following section gives the detailed description of the generalized PHCS and its advantages. One important special case of the generalized PHCS is adaptive progressive censoring, which arises as the first case of the generalized PHCS, as will be shown later in Section 2. For recent work on this topic, see, for example, Mohie El-Din et al. [14], Mohie El-Din et al. [15], Abu-Moussa et al. [16], Lee et al. [17] and Parviz and Panahi [18].

    The contribution of this paper is that we develop inference techniques for Burr Type-Ⅻ data based on the generalized PHCS, which has not been considered in the literature. The Burr Type-Ⅻ distribution has the following probability density function (PDF) and cumulative distribution function (CDF), respectively given by

    \begin{equation} f(x|\alpha ,\beta ) = \alpha \beta x^{\beta -1}\left( 1+x^{\beta }\right) ^{-(\alpha +1)},\ \ x > 0,\ \alpha > 0,\ \beta > 0, \end{equation} (1.1)
    \begin{equation} F(x|\alpha ,\beta ) = 1-\left( 1+x^{\beta }\right) ^{-\alpha },\ \ x > 0,\ \alpha > 0,\ \beta > 0. \end{equation} (1.2)

    The survival and hazard functions are given, respectively, by

    \begin{equation} R(x|\alpha ,\beta ) = \bar{F}(x|\alpha ,\beta ) = 1-F(x|\alpha ,\beta ),\ \ \ h(x|\alpha ,\beta ) = \alpha \beta x^{\beta -1}\left( 1+x^{\beta }\right) ^{-1},\ \ x > 0,\ \alpha > 0,\ \beta > 0. \end{equation} (1.3)
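    As a quick numerical illustration of (1.1)–(1.3), the density, distribution, survival and hazard functions can be evaluated with the following Python sketch (the function names are illustrative and not part of the paper):

```python
import numpy as np

# Burr Type-XII functions from Eqs. (1.1)-(1.3); valid for x > 0, alpha > 0, beta > 0.

def burr_pdf(x, alpha, beta):
    # f(x|alpha, beta) = alpha*beta*x^(beta-1)*(1 + x^beta)^(-(alpha+1))
    x = np.asarray(x, dtype=float)
    return alpha * beta * x ** (beta - 1.0) * (1.0 + x ** beta) ** (-(alpha + 1.0))

def burr_cdf(x, alpha, beta):
    # F(x|alpha, beta) = 1 - (1 + x^beta)^(-alpha)
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + x ** beta) ** (-alpha)

def burr_sf(x, alpha, beta):
    # R(x|alpha, beta) = 1 - F(x|alpha, beta) = (1 + x^beta)^(-alpha)
    x = np.asarray(x, dtype=float)
    return (1.0 + x ** beta) ** (-alpha)

def burr_hazard(x, alpha, beta):
    # h(x|alpha, beta) = f(x)/R(x) = alpha*beta*x^(beta-1)/(1 + x^beta)
    x = np.asarray(x, dtype=float)
    return alpha * beta * x ** (beta - 1.0) / (1.0 + x ** beta)

if __name__ == "__main__":
    # e.g., with alpha = 2 and beta = 1 (the values used later in the simulations)
    print(burr_sf(1.0, 2.0, 1.0), burr_hazard(1.0, 2.0, 1.0))   # 0.25  1.0
```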

    The Bayesian estimate $\hat{\theta}_{BS}$ relative to the squared error loss function is given by the mean of the posterior distribution

    \begin{equation} \hat{\theta}_{BS} = E_{\theta |\underline{\bf{x}}}\left[ \theta \right]. \end{equation} (1.4)

    Assuming that the minimum loss occurs at $\hat{\theta} = \theta$, the LINEX loss function can be expressed as

    \begin{equation} L_{BL}(\hat{\theta},\theta ) = \exp \left[ \upsilon (\hat{\theta}-\theta )\right] -\upsilon (\hat{\theta}-\theta )-1,\ \ \upsilon \neq 0. \end{equation} (1.5)

    The sign and magnitude of the shape parameter $\upsilon$ represent the direction and degree of asymmetry, respectively. It is easily seen that the (unique) Bayesian estimator of $\theta$ under the LINEX loss function, denoted by $\hat{\theta}_{BL}$, namely the value that minimizes $E_{\theta |\underline{\bf{X}}}\left[ L_{BL}(\hat{\theta},\theta )\right]$, is given by

    \begin{equation} \hat{\theta}_{BL} = -\frac{1}{\upsilon }\ln \left\{ E_{\theta |\underline{\bf{x}}}\left[ \exp (-\upsilon \theta )\right] \right\}, \end{equation} (1.6)

    provided that the expectation $E_{\theta |\underline{\bf{x}}}\left[ \exp (-\upsilon \theta )\right]$ is finite. Calabria and Pulcini [19] have addressed the issue of selecting the value of the parameter $\upsilon$. The general entropy (GE) loss function is another widely used asymmetric loss function; it is given by

    \begin{equation} L_{BE}(\hat{\theta},\theta )\propto \left( \frac{\hat{\theta}}{\theta }\right) ^{\kappa }-\kappa \ln \left( \frac{\hat{\theta}}{\theta }\right) -1, \end{equation} (1.7)

    where for $\kappa > 0$ a positive error has a more serious effect than a negative error, while for $\kappa < 0$ a negative error is more serious than a positive one. In this case, the Bayesian estimate $\hat{\theta}_{BE}$ relative to the GE loss function is given by

    \begin{equation} \hat{\theta}_{BE} = \left\{ E_{\theta |\underline{\bf{x}}}\left[ \theta ^{-\kappa }\right] \right\} ^{-1/\kappa }, \end{equation} (1.8)

    provided that the involved expectation $E_{\theta |\underline{\bf{x}}}\left[ \theta ^{-\kappa }\right]$ is finite. It can be shown that, when $\kappa = 1$, the Bayesian estimate in (1.8) coincides with the Bayesian estimate under the weighted squared error loss function. Similarly, when $\kappa = -1$, the Bayesian estimate in (1.8) coincides with the Bayesian estimate under the SE loss function.

    The remainder of this paper is structured as follows: A summary of the generalized PHCS model is provided in Section 2. Section 3 derives the maximum likelihood (ML) estimates together with their existence and uniqueness properties, while the Bayesian estimates of the unknown parameters, survival and hazard functions under three loss functions are derived in Section 4. Section 5 develops the Bayesian one-sample prediction for the failure times of the units withdrawn at each censoring stage, while Section 6 develops the Bayesian prediction for progressive order statistics from an unobserved future sample from the same distribution. Simulation studies are conducted in Section 7 to compare the efficiency of the proposed inferential techniques. In Section 8, a real-life data set is used to demonstrate the theoretical findings. Finally, the paper is concluded in Section 9.

    Consider a life study in which $n$ identical units are tested. Denote the lifetimes, coming from a distribution with CDF $F(x|\alpha,\beta)$ and PDF $f(x|\alpha,\beta)$, by $X_1,X_2,\ldots,X_n$. The generalized PHCS is as follows: Let $T>0$ and $k,m\in\{1,2,\ldots,n\}$ be pre-fixed integers with $k<m$, and let the pre-determined censoring scheme $R=(R_1,R_2,\ldots,R_m)$ satisfy $n=m+R_1+\cdots+R_m$. At the occurrence of the first failure, $R_1$ of the remaining units are randomly removed. At the occurrence of the second failure, $R_2$ of the surviving units are removed from the experiment, and so on, until the termination time $T^{\ast}=\max\{X_{k:m:n},\min\{X_{m:m:n},T\}\}$ is reached, at which moment all remaining surviving units are removed from the test. The generalized PHCS modifies the PHCS by permitting the experiment to proceed beyond $T$ if only a few failures have been observed up to $T$. Ideally, the experimenters would like to observe $m$ failures within this scheme, but they will observe at least $k$ failures. Let $D$ indicate the number of failures observed up to $T$ (see Figure 1).

    Figure 1.  Schematic representation of generalized progressive hybrid censoring scheme.

    As mentioned above, one of the following types of observations is given under the generalized PHCS:

    1. Assume that the $k$th failure time comes after $T$; then the experiment terminates at $X_{k:m:n}$ and the observations are $\{X_{1:m:n} < \ldots < X_{k:m:n}\}$.

    2. Assume that $T$ is reached after the $k$th failure and before the $m$th failure. In this case, the termination time is $T$ and we observe $\{X_{1:m:n} < \ldots < X_{k:m:n} < X_{k+1:m:n} < \ldots < X_{D:m:n}\}$.

    3. Assume that the $m$th failure is detected after the $k$th failure and before $T$; then the termination time is $X_{m:m:n}$ and we observe $\{X_{1:m:n} < \ldots < X_{k:m:n} < X_{k+1:m:n} < \ldots < X_{m:m:n}\}$.

    Now, the joint probability density function based on the generalized PHCS, covering all cases, is given by:

    \begin{equation} f_{\underline{\bf{X}}}(\underline{\bf{x}}) = \left[ \prod\limits_{i = 1}^{D^{\ast }}\sum\limits_{j = i}^{m}(R_{j}^{\ast }+1)\right] \prod\limits_{i = 1}^{D^{\ast }}f(x_{i:D^{\ast }:n})\left[ \bar{F}(x_{i:D^{\ast }:n})\right] ^{R_{i}^{\ast }}\left[ \bar{F}(T)\right] ^{R_{\tau }}, \end{equation} (2.1)

    where $R_{j}^{\ast }$ is the $j$th element of the vector $R^{\ast }$,

    \begin{equation} R^{\ast } = \left\{ \begin{array}{ll} (R_{1},\ldots ,R_{D},0,\ldots ,0,R_{k}^{\ast } = n-k-\sum_{j = 1}^{D}R_{j}), & \;{\rm{if }}\;T < X_{k:m:n} < X_{m:m:n}, \\ (R_{1},\ldots ,R_{D}), & \;{\rm{if }}\;X_{k:m:n}\leq T < X_{m:m:n}, \\ (R_{1},\ldots ,R_{m}), & \;{\rm{if }}\;X_{k:m:n} < X_{m:m:n}\leq T, \end{array} \right. \end{equation} (2.2)

    $R_{\tau }$ is the number of units that are removed at time $T$, given by

    \begin{equation} R_{\tau } = \left\{ \begin{array}{ll} 0, & \;{\rm{if }}\;T < X_{k:m:n} < X_{m:m:n}, \\ n-D-\sum_{j = 1}^{D}R_{j}, & \;{\rm{if }}\;X_{k:m:n}\leq T < X_{m:m:n}, \\ 0, & \;{\rm{if }}\;X_{k:m:n} < X_{m:m:n}\leq T, \end{array} \right. \end{equation} (2.3)
    \begin{equation} D^{\ast } = \left\{ \begin{array}{ll} k, & \;{\rm{if }}\;T < X_{k:m:n} < X_{m:m:n}, \\ D, & \;{\rm{if }}\;X_{k:m:n}\leq T < X_{m:m:n}, \\ m, & \;{\rm{if }}\;X_{k:m:n} < X_{m:m:n}\leq T, \end{array} \right. \end{equation} (2.4)

    and

    \begin{equation} \underline{\bf{x}} = \left\{ \begin{array}{ll} (x_{1:m:n},\ldots ,x_{k:m:n}), & \;{\rm{if }}\;T < X_{k:m:n} < X_{m:m:n}, \\ (x_{1:m:n},\ldots ,x_{D:m:n}), & \;{\rm{if }}\;X_{k:m:n}\leq T < X_{m:m:n}, \\ (x_{1:m:n},\ldots ,x_{m:m:n}), & \;{\rm{if }}\;X_{k:m:n} < X_{m:m:n}\leq T. \end{array} \right. \end{equation} (2.5)
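    The three cases can also be summarized computationally. The following Python sketch reduces a (hypothetical) fully observed progressive Type-Ⅱ censored sample $x_{1:m:n},\ldots,x_{m:m:n}$ with scheme $R$ to a generalized PHCS according to (2.2)–(2.5); the helper name and argument layout are assumptions made only for illustration:

```python
import numpy as np

def generalized_phcs(x, R, T, k):
    """Reduce a progressive Type-II censored sample (x, R) to a generalized PHCS.
    Returns the observed failure times, the effective scheme R*, R_tau and D*
    following Eqs. (2.2)-(2.5).  Assumes x is sorted, len(x) == len(R) == m, k <= m."""
    x = np.asarray(x, dtype=float)
    R = list(R)
    m = len(x)
    n = m + int(sum(R))
    D = int(np.sum(x <= T))                       # number of failures observed up to T
    if T < x[k - 1]:                              # Case I: terminate at the k-th failure
        R_star = R[:D] + [0] * (k - D - 1) + [n - k - int(sum(R[:D]))]
        return x[:k], R_star, 0, k
    elif x[m - 1] <= T:                           # Case III: terminate at the m-th failure
        return x, R, 0, m
    else:                                         # Case II: terminate at time T
        R_tau = n - D - int(sum(R[:D]))
        return x[:D], R[:D], R_tau, D
```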

    The likelihood function of α,β under the generalized PHCS can be derived using (1.2) and (1.1) in (2.1), as

    \begin{equation} L(\alpha ,\beta |\underline{\bf{x}}) = \left[ \prod\limits_{i = 1}^{D^{\ast }}\sum\limits_{j = i}^{m}(R_{j}^{\ast }+1)\right] \alpha ^{D^{\ast }}\beta ^{D^{\ast }}\prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\exp \left[ -\alpha W(\beta |\underline{\bf{x}})\right], \end{equation} (2.6)

    where $W(\beta |\underline{\bf{x}}) = \sum_{i = 1}^{D^{\ast }}(R_{i}^{\ast }+1)\ln (1+x_{i}^{\beta })+R_{\tau }\ln (1+T^{\beta })$ and $x_{i} = x_{i:D^{\ast }:n}$ for simplicity of notation.

    The corresponding log-likelihood function is obtained from (2.6) as

    \begin{equation} \ln L(\alpha ,\beta |\underline{\bf{x}}) = {\rm{const.}}+D^{\ast }(\ln \alpha +\ln \beta )+(\beta -1)\sum\limits_{i = 1}^{D^{\ast }}\ln x_{i}-\sum\limits_{i = 1}^{D^{\ast }}\ln (1+x_{i}^{\beta })-\alpha W(\beta |\underline{\bf{x}}), \end{equation} (3.1)

    equating the first derivatives of (3.1) with respect to β and α to zero, we get

    \begin{equation} \frac{\partial \ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \alpha } = \frac{D^{\ast }}{\alpha }-W(\beta |\underline{\bf{x}}) = 0, \end{equation} (3.2)
    \begin{equation} \frac{\partial \ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \beta } = \frac{D^{\ast }}{\beta }+\sum\limits_{i = 1}^{D^{\ast }}\ln x_{i}-\sum\limits_{i = 1}^{D^{\ast }}\frac{\left\{ \alpha (R_{i}^{\ast }+1)+1\right\} x_{i}^{\beta }\ln x_{i}}{1+x_{i}^{\beta }}-\frac{\alpha R_{\tau }T^{\beta }\ln T}{1+T^{\beta }} = 0. \end{equation} (3.3)

    The ML estimates of the parameters $\alpha$ and $\beta$, $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$ respectively, can be obtained by solving the two likelihood Eqs (3.2) and (3.3). We have employed the Newton-Raphson iteration method to evaluate $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$. By the invariance property, the ML estimates of the corresponding survival and hazard functions are then given, respectively, by

    \begin{equation} \hat{R}_{ML}(t) = (1+t^{\hat{\beta}_{ML}})^{-\hat{\alpha}_{ML}}, \end{equation} (3.4)
    \begin{equation} \hat{h}_{ML}(t) = \hat{\alpha}_{ML}\hat{\beta}_{ML}t^{\hat{\beta}_{ML}-1}(1+t^{\hat{\beta}_{ML}})^{-1}. \end{equation} (3.5)

    The ML estimate of $\alpha$ can be obtained in explicit form, depending on $\hat{\beta}_{ML}$, from (3.2) as follows:

    \begin{equation} \hat{\alpha}_{ML} = \frac{D^{\ast }}{W(\hat{\beta}_{ML}|\underline{\bf{x}})}. \end{equation} (3.6)

    Then $\hat{\alpha}_{ML}$ exists and is unique whenever $\hat{\beta}_{ML}$ exists and is unique. Now, substituting (3.6) into (3.3), we get

    \begin{equation} J(\beta ) = \frac{D^{\ast }}{\beta }+\sum\limits_{i = 1}^{D^{\ast }}\frac{\ln x_{i}}{1+x_{i}^{\beta }}-\frac{D^{\ast }\left[ \sum\limits_{i = 1}^{D^{\ast }}\frac{(R_{i}^{\ast }+1)x_{i}^{\beta }\ln x_{i}}{1+x_{i}^{\beta }}+\frac{R_{\tau }T^{\beta }\ln T}{1+T^{\beta }}\right] }{\sum\limits_{i = 1}^{D^{\ast }}(R_{i}^{\ast }+1)\ln (1+x_{i}^{\beta })+R_{\tau }\ln (1+T^{\beta })}. \end{equation} (3.7)

    The MLE of $\beta$ is obtained by solving the non-linear equation $J(\beta ) = 0$ in $\beta$. The question now is: does the MLE of $\beta$ exist, and is it unique? The answer is related to the behavior of the function $J(\beta )$. At least two distinct observations $x_{i}\neq x_{j}$ for some $i\neq j$ are required in the sample in order to estimate the parameters $\alpha$ and $\beta$ jointly. The following theorem gives the conditions that are necessary for the existence and uniqueness of $\hat{\beta}_{ML}$.

    Theorem 3.1. Let $x_{1}\leq x_{2}\leq \ldots \leq x_{D^{\ast }}$ be the observed sample with at least two distinct values; then the MLE $\hat{\beta}_{ML}$, and hence $\hat{\alpha}_{ML}$, exists and is unique if and only if $x_{i} < 1$ for some $i$ $(1\leq i\leq D^{\ast })$.

    Proof. The idea of the proof is to show that $J(\beta )$ is a decreasing function with $J(0) > 0$ and $J(\infty ) < 0$, which means that $J(\beta )$ has a unique root in $(0,\infty )$; therefore $\hat{\alpha}_{ML}$ also exists and is unique.

    Now, after straightforward algebraic manipulation, we can show that

    \begin{equation} J(0^{+}) = +\infty \ \ {\rm{and}}\ \ J(\infty ) = \sum\limits_{i = 1}^{D^{\ast }}I_{i}(x_{i})\ln x_{i}, \end{equation} (3.8)

    where

    \begin{equation} I_{i}(x_{i}) = \left\{ \begin{array}{ll} 1 & \;{\rm{if }}\;0 < x_{i} < 1, \\ 0 & \;{\rm{if }}\;x_{i}\geq 1. \end{array} \right. \end{equation} (3.9)

    It is obvious that $J(\infty ) < 0$ if and only if $x_{i} < 1$ for some $1\leq i\leq D^{\ast }$. Therefore, there exists at least one finite solution of the equation $J(\beta ) = 0$. Now, since the function $J(\beta )$ is monotone decreasing in $\beta$ (its derivative $J^{\prime }(\beta )$ is negative), it follows that $\hat{\beta}_{ML}$ exists and is unique.

    For more details about the existence and uniqueness for the parameters of Burr Type-Ⅻ distribution in case of Type-Ⅱ censoring, see Wingo [20].

    Example 1. In this example, a real data set represents the time to failure (in months) of electronic components on test. These data were reported in Wingo [20], who assumed that the Burr Type-Ⅻ distribution fits these lifetime data. The test was performed using 30 units but was terminated after the failure of 20 units. The data are given as follows:

    0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.5, 2.6, 2.9, 3.1

    Here, we use these data to generate generalized PHCS samples as follows:

    1. Case-Ⅰ Let $T = 1$, $k = 15$, $m = 20$ and $n = 30$; then $(T < X_{k:m:n} < X_{m:m:n})$ and the termination time is $x_{k} = 1.8$. The failure times are $x = \{0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8\}$ with censoring scheme $R = \{0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 0, 9\}$ and $R_{\tau } = 0$.

    2. Case-Ⅱ Let $T = 2.55$, $k = 15$, $m = 20$ and $n = 30$; then $(X_{k:m:n} < T < X_{m:m:n})$ and the termination time is $T$. The failure times are $x = \{0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.5\}$ with censoring scheme $R = \{0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 2, 0, 0, 0\}$ and $R_{\tau } = 5$.

    3. Case-Ⅲ Let $T = 3.5$, $k = 15$, $m = 20$ and $n = 30$; then $(X_{k:m:n} < X_{m:m:n} < T)$ and the termination time is $X_{m:m:n} = 3.1$. The failure times are $x = \{0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.5, 2.6, 2.9, 3.1\}$, with censoring scheme $R = \{0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2\}$ and $R_{\tau } = 0$.

    Figure 2 shows that the derivative $J^{\prime }(\beta )$ is negative in all cases, while Figure 3 shows that $J(\beta )$ is a monotone decreasing function with only one root of $J(\beta ) = 0$.

    Figure 2.  The graph of $J^{\prime }(\beta )$ for (a) Case-Ⅰ, (b) Case-Ⅱ, and (c) Case-Ⅲ.
    Figure 3.  The graph of J(β) for (a) Case-Ⅰ, (b) Case-Ⅱ and (c) Case-Ⅲ.

    This example shows that the MLE of $\beta$ exists and is unique, and hence so does that of $\alpha$, where $\hat{\alpha}_{ML}$ equals 0.763076, 0.774599 and 0.853238 for Case-Ⅰ, Case-Ⅱ and Case-Ⅲ, respectively.
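    For illustration, the profile equation $J(\beta ) = 0$ and Eq. (3.6) can also be solved numerically with a bracketing root-finder (instead of the Newton-Raphson iteration used above); in the following Python sketch the bracket, argument layout and function names are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def W(beta, x, R_star, R_tau, T):
    # W(beta | x) = sum_i (R_i*+1) ln(1 + x_i^beta) + R_tau ln(1 + T^beta)
    x = np.asarray(x, dtype=float)
    return np.sum((np.asarray(R_star) + 1.0) * np.log1p(x ** beta)) + R_tau * np.log1p(T ** beta)

def J(beta, x, R_star, R_tau, T):
    # Profile score J(beta) of Eq. (3.7); its root is the MLE of beta.
    x = np.asarray(x, dtype=float)
    R_star = np.asarray(R_star, dtype=float)
    Dstar = len(x)
    num = np.sum((R_star + 1.0) * x ** beta * np.log(x) / (1.0 + x ** beta)) \
          + R_tau * T ** beta * np.log(T) / (1.0 + T ** beta)
    return (Dstar / beta + np.sum(np.log(x) / (1.0 + x ** beta))
            - Dstar * num / W(beta, x, R_star, R_tau, T))

def burr_mle(x, R_star, R_tau, T, bracket=(1e-4, 50.0)):
    # beta_hat solves J(beta) = 0 (the bracket must enclose the root);
    # alpha_hat = D* / W(beta_hat | x) by Eq. (3.6).
    beta_hat = brentq(J, bracket[0], bracket[1], args=(x, R_star, R_tau, T))
    alpha_hat = len(x) / W(beta_hat, x, R_star, R_tau, T)
    return alpha_hat, beta_hat
```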

    For large $D^{\ast }$, the observed Fisher information matrix of the parameters $\alpha$ and $\beta$ is given by

    \begin{equation} I(\hat{\alpha},\hat{\beta}) = \left[ \begin{array}{cc} -\frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \alpha ^{2}} & -\frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \alpha \partial \beta } \\ -\frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \beta \partial \alpha } & -\frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \beta ^{2}} \end{array} \right] _{(\hat{\alpha}_{ML},\hat{\beta}_{ML})}, \end{equation} (3.10)

    where

    \begin{equation*} \frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \alpha ^{2}} = -\frac{D^{\ast }}{\alpha ^{2}}, \end{equation*}
    \begin{equation*} \frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \beta ^{2}} = -\frac{D^{\ast }}{\beta ^{2}}-\sum\limits_{i = 1}^{D^{\ast }}\left[ \alpha (R_{i}^{\ast }+1)+1\right] \left[ \frac{(\ln x_{i})^{2}x_{i}^{\beta }}{(1+x_{i}^{\beta })^{2}}\right], \end{equation*}
    \begin{equation*} \frac{\partial ^{2}\ln L(\alpha ,\beta |\underline{\bf{x}})}{\partial \alpha \partial \beta } = -\left[ \sum\limits_{i = 1}^{D^{\ast }}\frac{(R_{i}^{\ast }+1)x_{i}^{\beta }\ln x_{i}}{1+x_{i}^{\beta }}\right], \end{equation*}

    and $100(1-\gamma )\%$ two-sided approximate confidence intervals for the parameters $\alpha$ and $\beta$ are then

    \begin{equation} \left( \hat{\alpha}-z_{\gamma /2}\sqrt{V(\hat{\alpha})},\ \hat{\alpha}+z_{\gamma /2}\sqrt{V(\hat{\alpha})}\right), \end{equation} (3.11)

    and

    \begin{equation} \left( \hat{\beta}-z_{\gamma /2}\sqrt{V(\hat{\beta})},\ \hat{\beta}+z_{\gamma /2}\sqrt{V(\hat{\beta})}\right), \end{equation} (3.12)

    respectively, where $V(\hat{\alpha})$ and $V(\hat{\beta})$ are the estimated variances of $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$, given by the first and second diagonal elements of $I^{-1}(\hat{\alpha},\hat{\beta})$, and $z_{\gamma /2}$ is the upper $(\gamma /2)$ percentile of the standard normal distribution.
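    A sketch of how the Wald-type intervals (3.11) and (3.12) can be evaluated in Python, using the observed information matrix built from the second derivatives listed above (the helper names and the data layout follow the earlier sketches and are assumptions):

```python
import numpy as np
from scipy.stats import norm

def observed_information(alpha, beta, x, R_star):
    # Matrix of negative second derivatives of the log-likelihood, Eq. (3.10).
    x = np.asarray(x, dtype=float)
    R_star = np.asarray(R_star, dtype=float)
    Dstar = len(x)
    I_aa = Dstar / alpha ** 2
    I_bb = Dstar / beta ** 2 + np.sum((alpha * (R_star + 1.0) + 1.0)
                                      * (np.log(x)) ** 2 * x ** beta / (1.0 + x ** beta) ** 2)
    I_ab = np.sum((R_star + 1.0) * x ** beta * np.log(x) / (1.0 + x ** beta))
    return np.array([[I_aa, I_ab], [I_ab, I_bb]])

def wald_intervals(alpha_hat, beta_hat, x, R_star, gamma=0.05):
    # 100(1-gamma)% approximate intervals (3.11)-(3.12) from the inverse information.
    V = np.linalg.inv(observed_information(alpha_hat, beta_hat, x, R_star))
    z = norm.ppf(1.0 - gamma / 2.0)
    ci_alpha = (alpha_hat - z * np.sqrt(V[0, 0]), alpha_hat + z * np.sqrt(V[0, 0]))
    ci_beta = (beta_hat - z * np.sqrt(V[1, 1]), beta_hat + z * np.sqrt(V[1, 1]))
    return ci_alpha, ci_beta, V
```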

    Greene [21] used the delta method to construct approximate confidence intervals for the survival and hazard functions based on the MLEs. This method is used in this subsection to compute a linear approximation of each function and then the variance of that simpler linear function, which can be used for large-sample inference; see Greene [21] and Agresti [22]. Let

    \begin{equation} G_{1} = \left[ \frac{\partial R(t)}{\partial \alpha },\ \frac{\partial R(t)}{\partial \beta }\right] \ \ {\rm{and}}\ \ G_{2} = \left[ \frac{\partial h(t)}{\partial \alpha },\ \frac{\partial h(t)}{\partial \beta }\right], \end{equation} (3.13)

    where

    \begin{equation*} \frac{\partial R(t)}{\partial \alpha } = -(1+t^{\beta })^{-\alpha }\ln (1+t^{\beta }),\ \ \ \frac{\partial R(t)}{\partial \beta } = -\alpha t^{\beta }(1+t^{\beta })^{-(\alpha +1)}\ln (t), \end{equation*}
    \begin{equation*} \frac{\partial h(t)}{\partial \alpha } = \beta t^{\beta -1}(1+t^{\beta })^{-1}, \end{equation*}

    and

    \begin{equation*} \frac{\partial h(t)}{\partial \beta } = \frac{\alpha \left\{ (1+t^{\beta })\left[ t^{\beta -1}+\beta t^{\beta -1}\ln (t)\right] -\beta t^{2\beta -1}\ln (t)\right\} }{(1+t^{\beta })^{2}}. \end{equation*}

    Then, the approximate estimates of $V(\hat{R}(t))$ and $V(\hat{h}(t))$ are given, respectively, by

    \begin{equation*} V(\hat{R}(t))\simeq \left[ G_{1}^{t}\,I^{-1}(\alpha ,\beta )\,G_{1}\right] _{(\hat{\alpha}_{ML},\hat{\beta}_{ML})},\ \ \ V(\hat{h}(t))\simeq \left[ G_{2}^{t}\,I^{-1}(\alpha ,\beta )\,G_{2}\right] _{(\hat{\alpha}_{ML},\hat{\beta}_{ML})}, \end{equation*}

    where $G_{i}^{t}$ is the transpose of $G_{i}$, $i = 1,2$. These results yield the approximate confidence intervals for $R(t)$ and $h(t)$ as:

    \begin{equation} \left( \hat{R}(t)-z_{\gamma /2}\sqrt{V(\hat{R}(t))},\ \hat{R}(t)+z_{\gamma /2}\sqrt{V(\hat{R}(t))}\right), \end{equation} (3.14)

    and

    \begin{equation} \left( \hat{h}(t)-z_{\gamma /2}\sqrt{V(\hat{h}(t))},\ \hat{h}(t)+z_{\gamma /2}\sqrt{V(\hat{h}(t))}\right). \end{equation} (3.15)
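    The delta-method intervals (3.14) and (3.15) follow directly from the gradients in (3.13); a Python sketch (here V denotes the inverse observed information evaluated at the MLEs, e.g. as returned by the previous sketch, and the function name is illustrative):

```python
import numpy as np
from scipy.stats import norm

def delta_method_intervals(t, alpha_hat, beta_hat, V, gamma=0.05):
    """Approximate 100(1-gamma)% intervals for R(t) and h(t) via the delta method,
    Eqs. (3.13)-(3.15); V is the 2x2 inverse observed information at the MLEs."""
    a, b = alpha_hat, beta_hat
    tb = t ** b
    # Gradients of R(t) and h(t) with respect to (alpha, beta).
    dR = np.array([-(1.0 + tb) ** (-a) * np.log1p(tb),
                   -a * tb * (1.0 + tb) ** (-(a + 1.0)) * np.log(t)])
    dh = np.array([b * t ** (b - 1.0) / (1.0 + tb),
                   a * ((1.0 + tb) * (t ** (b - 1.0) + b * t ** (b - 1.0) * np.log(t))
                        - b * t ** (2.0 * b - 1.0) * np.log(t)) / (1.0 + tb) ** 2])
    R_hat = (1.0 + tb) ** (-a)
    h_hat = a * b * t ** (b - 1.0) / (1.0 + tb)
    z = norm.ppf(1.0 - gamma / 2.0)
    se_R = np.sqrt(dR @ V @ dR)
    se_h = np.sqrt(dh @ V @ dh)
    return (R_hat - z * se_R, R_hat + z * se_R), (h_hat - z * se_h, h_hat + z * se_h)
```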

    Under the assumption that both parameters α and β are unknown, we may consider the joint prior density function of α and β which was suggested by Al-Hussaini and Jaheen [23] given by

    \begin{equation} \pi (\alpha ,\beta )\ \propto \ \alpha ^{a-1}\beta ^{a+c-1}\exp (-\beta b)\exp \left[ -\alpha (\beta d)\right], \end{equation} (4.1)

    where $a$, $b$, $c$, $d$ are non-negative constants. If the hyperparameters $a$, $b$, $c$ and $d$ are chosen to be equal to zero, the informative prior reduces to the non-informative prior.

    Upon combining (2.6) and (4.1), given the generalized PHCS, the posterior density function of α,β is obtained as

    \begin{eqnarray} \pi ^{\ast }(\alpha ,\beta |\underline{\bf{x}}) & = &L(\alpha ,\beta |\underline{\bf{x}})\pi (\alpha ,\beta )\Big/\int\limits_{0}^{\infty }\int\limits_{0}^{\infty }L(\alpha ,\beta |\underline{\bf{x}})\pi (\alpha ,\beta )d\alpha d\beta \\ & = &I^{-1}\alpha ^{D^{\ast }+a-1}\beta ^{D^{\ast }+a+c-1}\exp (-\beta b)\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp \left\{ -\alpha \left[ W(\beta |\underline{\bf{x}})+\beta d\right] \right\}, \end{eqnarray} (4.2)

    where

    \begin{eqnarray} I & = &\int\limits_{0}^{\infty }\int\limits_{0}^{\infty }\alpha ^{D^{\ast }+a-1}\beta ^{D^{\ast }+a+c-1}\exp (-\beta b)\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp \left\{ -\alpha \left[ W(\beta |\underline{\bf{x}})+\beta d\right] \right\} d\alpha d\beta \\ & = &\Gamma (D^{\ast }+a)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-1}\exp (-\beta b)\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \left[ W(\beta |\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a)}d\beta . \end{eqnarray} (4.3)

    Hence, from (1.4), the Bayesian estimates of α and β under the squared error loss function are obtained, respectively, as

    \begin{equation} \hat{\alpha}_{BS} = I^{-1}\Gamma (D^{\ast }+a+1)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-1}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp (-\beta b)\left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a+1)}d\beta , \end{equation} (4.4)
    \begin{equation} \hat{\beta}_{BS} = I^{-1}\Gamma (D^{\ast }+a)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp (-\beta b)\left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a)}d\beta . \end{equation} (4.5)

    From (1.6), the Bayesian estimators of $\alpha$ and $\beta$ under the LINEX loss function are obtained, respectively, as

    \begin{equation} \hat{\alpha}_{BL} = \frac{-1}{\upsilon }\ln \left\{ I^{-1}\Gamma (D^{\ast }+a)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-1}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp (-\beta b)\left[ W(\beta ;\underline{\bf{x}})+\upsilon +\beta d\right] ^{-(D^{\ast }+a)}d\beta \right\}, \end{equation} (4.6)
    \begin{equation} \hat{\beta}_{BL} = \frac{-1}{\upsilon }\ln \left\{ I^{-1}\Gamma (D^{\ast }+a)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-1}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp \left( -\beta (b+\upsilon )\right) \left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a)}d\beta \right\}. \end{equation} (4.7)

    From (1.8), the Bayesian estimators of $\alpha$ and $\beta$ under the GE loss function are obtained, respectively, as

    \begin{equation} \hat{\alpha}_{BE} = \left\{ I^{-1}\Gamma (D^{\ast }+a-\kappa )\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-1}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp (-\beta b)\left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a-\kappa )}d\beta \right\} ^{-1/\kappa }, \end{equation} (4.8)
    \begin{equation} \hat{\beta}_{BE} = \left\{ I^{-1}\Gamma (D^{\ast }+a)\int\limits_{0}^{\infty }\beta ^{D^{\ast }+a+c-\kappa -1}\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp (-\beta b)\left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{-(D^{\ast }+a)}d\beta \right\} ^{-1/\kappa }. \end{equation} (4.9)

    Since the integrals in (4.4), (4.5), (4.6), (4.7), (4.8) and (4.9) cannot be computed analytically, the Markov chain Monte Carlo (MCMC) method is used for evaluating them. Now, from the posterior distribution in (4.2), the conditional posterior distributions $\pi _{1}(\alpha |\beta ;\underline{\bf{x}})$ and $\pi _{2}(\beta |\alpha ;\underline{\bf{x}})$ of the parameters $\alpha$ and $\beta$ can be computed and written, respectively, as

    \begin{eqnarray} \pi _{1}(\alpha |\beta ;\underline{\bf{x}}) & = &\frac{\left[ W(\beta ;\underline{\bf{x}})+\beta d\right] ^{D^{\ast }+a}}{\Gamma (D^{\ast }+a)}\alpha ^{D^{\ast }+a-1}\exp \left\{ -\alpha \left[ W(\beta ;\underline{\bf{x}})+\beta d\right] \right\} \\ & = &{\rm{GammaDistribution}}\left[ D^{\ast }+a,\left( W(\beta ;\underline{\bf{x}})+\beta d\right) \right], \end{eqnarray} (4.10)

    and

    \begin{equation} \pi _{2}(\beta |\alpha ;\underline{\bf{x}})\propto \beta ^{D^{\ast }+a+c-1}\exp (-\beta b)\left( \prod\limits_{i = 1}^{D^{\ast }}\frac{x_{i}^{\beta -1}}{1+x_{i}^{\beta }}\right) \exp \left\{ -\alpha \left[ W(\beta ;\underline{\bf{x}})+\beta d\right] \right\}. \end{equation} (4.11)

    Since the conditional distribution of $\beta$ in (4.11) is not a well-known distribution, the Metropolis-Hastings sampler is used to generate samples of $\beta$ inside the MCMC algorithm; see Metropolis et al. [24]. MCMC Algorithm 1 is used to generate samples of $\alpha$ and $\beta$ from the conditional posterior distributions, which will be used for approximating their Bayes estimates.

    Algorithm 1 MCMC method.
    Step 1: Start with $\alpha ^{(0)} = \hat{\alpha}_{ML}$ and $\beta ^{(0)} = \hat{\beta}_{ML}$.
    Step 2: Set $i = 1$.
    Step 3: Generate $\alpha ^{(i)}\sim {\rm{GammaDistribution}}\left[ D^{\ast }+a,\left( W(\beta ^{(i-1)};\underline{\bf{x}})+\beta ^{(i-1)}d\right) \right] = \pi _{1}(\alpha |\beta ^{(i-1)};\underline{\bf{x}})$.
    Step 4: Generate a proposal $\beta ^{(\ast )}$ from $N(\beta ^{(i-1)},V(\hat{\beta}))$.
    Step 5: Calculate the acceptance probability $d_{\beta } = \min \left[ 1,\frac{\pi _{2}(\beta ^{(\ast )}|\alpha ^{(i)};\underline{\bf{x}})}{\pi _{2}(\beta ^{(i-1)}|\alpha ^{(i)};\underline{\bf{x}})}\right]$.
    Step 6: Generate $u_{1}$ from the Uniform(0, 1) distribution; if $u_{1}\leq d_{\beta }$, set $\beta ^{(i)} = \beta ^{(\ast )}$, else set $\beta ^{(i)} = \beta ^{(i-1)}$.
    Step 7: Set $i = i+1$ and repeat Steps 3 to 7, $N$ times, to obtain $\left( \alpha ^{(j)},\beta ^{(j)}\right)$, $j = 1,2,...,N$.
    Step 8: Remove the first $B$ values of $\alpha$ and $\beta$ (the burn-in period), keeping $\alpha ^{(j)}$ and $\beta ^{(j)}$, $j = 1,2,...,N-B$.

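    A minimal Python sketch of Algorithm 1, assuming the data have already been reduced to the observed times x, the effective scheme R*, $R_{\tau }$ and $T$ as in Section 2; the starting values, proposal standard deviation and chain length shown here are illustrative defaults (the paper starts the chain at the ML estimates and uses $V(\hat{\beta})$ as the proposal variance):

```python
import numpy as np

def W(beta, x, R_star, R_tau, T):
    # W(beta | x) as defined after Eq. (2.6).
    x = np.asarray(x, dtype=float)
    return np.sum((np.asarray(R_star) + 1.0) * np.log1p(x ** beta)) + R_tau * np.log1p(T ** beta)

def log_pi2(beta, alpha, x, R_star, R_tau, T, a, b, c, d):
    # Log of the conditional posterior of beta in Eq. (4.11), up to an additive constant.
    if beta <= 0.0:
        return -np.inf
    x = np.asarray(x, dtype=float)
    Dstar = len(x)
    return ((Dstar + a + c - 1.0) * np.log(beta) - beta * b
            + np.sum((beta - 1.0) * np.log(x) - np.log1p(x ** beta))
            - alpha * (W(beta, x, R_star, R_tau, T) + beta * d))

def mcmc_algorithm1(x, R_star, R_tau, T, hyper=(0.0, 0.0, 0.0, 0.0),
                    start=(1.0, 1.0), prop_sd=0.1, N=11000, B=1000, seed=1):
    # Gibbs step for alpha from its Gamma conditional (4.10) (shape D*+a, rate W+beta*d)
    # and a Metropolis-Hastings step for beta with a normal random-walk proposal.
    a, b, c, d = hyper
    rng = np.random.default_rng(seed)
    Dstar = len(x)
    alpha, beta = start
    chain = np.empty((N, 2))
    for i in range(N):
        alpha = rng.gamma(Dstar + a, 1.0 / (W(beta, x, R_star, R_tau, T) + beta * d))
        beta_prop = rng.normal(beta, prop_sd)
        log_ratio = (log_pi2(beta_prop, alpha, x, R_star, R_tau, T, a, b, c, d)
                     - log_pi2(beta, alpha, x, R_star, R_tau, T, a, b, c, d))
        if np.log(rng.uniform()) <= min(0.0, log_ratio):
            beta = beta_prop
        chain[i] = (alpha, beta)
    return chain[B:]                              # discard the burn-in period
```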

    Assume that $g\left(\alpha, \beta\right)$ is any function of $\alpha$ and $\beta$; then the Bayesian estimates of $g$ using the MCMC values are obtained as follows:

    Based on the SE loss function, the Bayesian estimate of $g$ is given by

    \begin{equation} \hat{g(\alpha,\beta)}_{BS} = \frac{1}{N-B}\sum\limits_{i = 1}^{N-B}g(\alpha^{(i)},\beta^{(i)}), \end{equation} (4.12)

    Based on the LINEX loss function,

    \begin{equation} \hat{g(\alpha,\beta)}_{BL} = \frac{-1}{\upsilon}Ln\left[ \frac{1}{N-B}\sum\limits_{i = 1}^{N-B}e^{-\upsilon g(\alpha^{(i)},\beta^{(i)})}\right], \end{equation} (4.13)

    For the GE loss function, the Bayesian estimate is given by

    \begin{equation} \hat{g(\alpha,\beta)}_{BE} = \left[ \frac{1}{N-B}\sum\limits_{i = 1}^{N-B} [g(\alpha^{(i)},\beta^{(i)})]^{-\kappa}\right]^{-1/\kappa}, \end{equation} (4.14)

    The 100(1-\gamma)\% Bayesian confidence interval, or credible interval, \left(L, U\right) for a parameter \theta ( \theta is \alpha or \beta ) satisfies

    \begin{equation} \int\limits_{L}^{U}\pi ^{\ast }(\theta |\underline{\bf{x}})d\theta = 1-\gamma , \end{equation} (4.15)

    Since the integration in (4.15) cannot be solved analytically, the 100(1-\gamma)\% MCMC approximate credible intervals for \alpha and \beta , using the (N-B) generated values after sorting them in ascending order, \left(\alpha^{(1)}, \alpha^{(2)}, ..., \alpha^{(N-B)}\right) and \left(\beta^{(1)}, \beta^{(2)}, ..., \beta^{(N-B)}\right) , are given as follows:

    \begin{eqnarray} &&\left( \alpha_{[((N-B)\gamma/2)]}, \alpha_{[((N-B)(1-\gamma/2))]}\right) \\ &&\left(\beta_{[((N-B)\gamma/2)]},\beta_{[((N-B)(1-\gamma/2))]}\right) \end{eqnarray} (4.16)

    and the lengths of the credible intervals are the absolute difference between the lower and the upper bounds.
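    In practice, the estimates (4.12)–(4.14) and the interval (4.16) are computed directly from the post-burn-in draws; a short Python sketch (function names are illustrative):

```python
import numpy as np

def bayes_estimates(g_values, upsilon=0.5, kappa=0.5):
    """Bayes estimates of g(alpha, beta) from post-burn-in MCMC draws of g:
    squared error (4.12), LINEX (4.13) and general entropy (4.14)."""
    g = np.asarray(g_values, dtype=float)
    g_bs = np.mean(g)
    g_bl = -np.log(np.mean(np.exp(-upsilon * g))) / upsilon
    g_be = np.mean(g ** (-kappa)) ** (-1.0 / kappa)
    return g_bs, g_bl, g_be

def credible_interval(draws, gamma=0.05):
    # Equal-tailed 100(1-gamma)% MCMC credible interval as in (4.16).
    draws = np.sort(np.asarray(draws, dtype=float))
    M = len(draws)
    lower = draws[int(np.floor(M * gamma / 2.0))]
    upper = draws[min(int(np.ceil(M * (1.0 - gamma / 2.0))) - 1, M - 1)]
    return lower, upper
```

    For example, applying bayes_estimates to the first column of the chain gives the three estimates of $\alpha$, while applying it to the values $(1+t^{\beta^{(i)}})^{-\alpha^{(i)}}$ computed over the draws gives the corresponding estimates of $R(t)$.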

    For \rho = 1, 2, ..., R_{j}^{\ast } , let X_{\rho :R_{j}^{\ast }} denote the \rho ^{th} order statistic out of R_{j}^{\ast } removed units at stage j . Then, the conditional density function of X_{\rho :R_{j}^{\ast }} , given the observed generalized PHCS, is given, see Basak et al.[25], by

    \begin{equation} f(X_{\rho :R_{j}^{\ast }}|\underline{\bf{x}}) = f(x|\underline{\bf{x}} ) = \frac{R_{j}^{\ast }!}{(\rho -1)!(R_{j}^{\ast }-\rho )!}\frac{\left[ F(x)-F(x_{j})\right] ^{\rho -1}\left[ 1-F(x)\right] ^{R_{j}^{\ast }-\rho }f(x)}{\left[ 1-F(x_{j})\right] ^{R_{j}^{\ast }}},\ \ x > x_{j}, \end{equation} (5.1)

    where

    \begin{equation*} j = \left\{ \begin{array}{ll} 1,...,k & \;{\rm{if }}\;T < X_{k:m:n} < X_{m:m:n,} \\ 1,...,D,\tau & \;{\rm{if }}\;X_{k:m:n} < T < X_{m:m:n}, \\ 1,...,m & \;{\rm{if }}\;X_{k:m:n} < X_{m:m:n} < T, \end{array} \right. \end{equation*}

    with x_{\tau } = T .

    By using (1.2) and (1.1) in (5.1), given generalized PHCS, the conditional density function of X_{\rho :R_{j}^{\ast }} is then given as follows:

    \begin{equation} f(x|\underline{\bf{x}}) = \sum\limits_{q = 0}^{\rho -1}C_{q}\frac{\alpha \beta x^{\beta -1}}{1+x^{\beta }}\exp \left\{ -\alpha \left[ \varpi _{q}\ln \left( \frac{1+x^{\beta }}{1+x_{j}^{\beta }}\right) \right] \right\} ,\ \ x > x_{j}, \end{equation} (5.2)

    where C_{q} = \frac{(-1)^{q}\binom{\rho -1}{q}R_{j}^{\ast }!}{(\rho -1)!(R_{j}^{\ast }-\rho)!} and \varpi _{q} = q+R_{j}^{\ast }-\rho +1 for q = 0, ..., \rho -1.

    Upon combining (4.2), (5.2) and using the MCMC technique, then the Bayesian predictive density function of X_{\rho :R_{j}^{\ast }} , given generalized PHCS, is obtained as

    \begin{eqnarray} f^{\ast }(x|\underline{\bf{x}}) & = &\underset{0}{\overset{\infty }{\int }} \underset{0}{\overset{\infty }{\int }}f(x|\underline{\bf{x}})\pi ^{\ast }(\alpha ,\beta |\underline{\bf{x}})d\alpha d\beta \\ & = &\frac{1}{N-B}\sum\limits_{i = 1}^{N-B}\sum\limits_{q = 0}^{\rho -1}C_{q}\frac{{\alpha}^{(i)} {\beta}^{(i)} x^{{\beta}^{(i)} -1}}{1+x^{{\beta}^{(i)} }}\exp \left\{ -{\alpha}^{(i)} \left[ \varpi _{q}\ln \left( \frac{1+x^{{\beta}^{(i)} }}{1+x_{j}^{{\beta}^{(i)} }}\right) \right] \right\}. \end{eqnarray} (5.3)

    The Bayesian predictive survival function of X_{\rho :R_{j}^{\ast }} , given generalized PHCS, is given as

    \begin{eqnarray} \bar{F}^{\ast }(t|\underline{\bf{x}}) & = &\int\limits_{t}^{\infty }f^{\ast }(x|\underline{\bf{x}})dx \\ & = &\frac{1}{N-B}\sum\limits_{i = 1}^{N-B}\sum\limits_{q = 0}^{\rho -1}\frac{C_{q}}{\varpi_{q}}\left( \frac{1+t^{{\beta}^{(i)} }}{1+x_{j}^{{\beta}^{(i)} }}\right)^{-{\alpha}^{(i)}\varpi _{q}}. \end{eqnarray} (5.4)

    The Bayesian point predictor of X_{\rho :R_{j}^{\ast }} under the squared error loss function is the mean of the predictive density, given by

    \begin{eqnarray} \widehat{X}_{\rho :R_{j}^{\ast }}& = &\int\limits_{0}^{\infty }xf^{\ast }(x|\underline{\bf{x}})dx, \\ & = &\frac{1}{N-B}\sum\limits_{i = 1}^{N-B}\sum\limits_{q = 0}^{\rho -1}C_{q} {\alpha}^{(i)}\left(1+x_j^{{\beta}^{(i)}}\right)^{{\alpha}^{(i)}\varpi_{q}}\frac{\Gamma\left(1+\frac{1}{\beta^{(i)}}\right) \Gamma\left({\alpha}^{(i)}\varpi_{q}-\frac{1}{\beta^{(i)}}\right)}{\Gamma\left(1+{\alpha}^{(i)}\varpi_{q}\right)}, \end{eqnarray} (5.5)

    where f^{\ast }(x|\underline{\bf{x}}) is given as in (5.3). The Bayesian predictive bounds of 100(1-\gamma)\% two-sided equi-tailed (ET) interval for X_{\rho :R_{j}^{\ast }} can be obtained by solving the following two equations:

    \begin{equation} \bar{F}^{\ast }(L_{ET}|\underline{\bf{x}}) = \frac{\gamma }{2}\quad \quad {\rm{and}}\quad \quad \bar{F}^{\ast }(U_{ET}|\underline{\bf{x}}) = 1-\frac{ \gamma }{2}, \end{equation} (5.6)

    where \bar{F}^{\ast }(t|\underline{\bf{x}}) is given as in (5.4), and L_{ET} and U_{ET} denote the lower and upper bounds, respectively. On the other hand, for the highest posterior density (HPD) method, the following two equations need to be solved:

    \begin{equation*} \bar{F}^{\ast }(L_{HPD}|\underline{\bf{x}})-\bar{F}^{\ast }(U_{HPD}| \underline{\bf{x}}) = 1-\gamma, \end{equation*}

    and

    \begin{equation*} f^{\ast }(L_{HPD}|\underline{\bf{x}})-f^{\ast }(U_{HPD}|\underline{ \bf{x}}) = 0, \end{equation*}

    where f^{\ast }(x|\underline{\bf{x}}) is as in (5.3), and L_{HPD} and U_{HPD} denote the HPD lower and upper bounds, respectively.
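    For completeness, the predictive survival function (5.4) and equal-tailed predictive bounds can be evaluated from the MCMC draws; in the following Python sketch the bounds are located where the predictive survival equals $1-\gamma /2$ (lower) and $\gamma /2$ (upper), and the search interval and function names are assumptions:

```python
import math
import numpy as np
from scipy.optimize import brentq

def pred_sf_removed(t, x_j, rho, Rj, chain):
    """MCMC approximation (5.4) of the predictive survival function of the rho-th
    order statistic among the Rj units removed at the stage with observed failure
    time x_j; `chain` holds post-burn-in (alpha, beta) draws."""
    const = math.factorial(Rj) / (math.factorial(rho - 1) * math.factorial(Rj - rho))
    total = 0.0
    for alpha, beta in chain:
        for q in range(rho):
            Cq = (-1.0) ** q * math.comb(rho - 1, q) * const
            w = q + Rj - rho + 1.0
            total += (Cq / w) * ((1.0 + t ** beta) / (1.0 + x_j ** beta)) ** (-alpha * w)
    return total / len(chain)

def et_prediction_bounds(x_j, rho, Rj, chain, gamma=0.05, upper=1e3):
    # Equal-tailed 100(1-gamma)% predictive bounds found by root search over t > x_j.
    lower_bound = brentq(lambda t: pred_sf_removed(t, x_j, rho, Rj, chain) - (1.0 - gamma / 2.0),
                         x_j, upper)
    upper_bound = brentq(lambda t: pred_sf_removed(t, x_j, rho, Rj, chain) - gamma / 2.0,
                         x_j, upper)
    return lower_bound, upper_bound
```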

    Let Y_{1:\ell :N}\leq Y_{2:\ell :N}\leq \ldots \leq Y_{\ell :\ell :N} be a future independent progressive Type-Ⅱ censored sample from the same population with censoring scheme S = (S_{1}, ..., S_{\ell }) . In this section, we develop a general procedure for deriving the point and interval predictions for Y_{s:\ell :N} , 1\leq s\leq \ell , based on the observed generalized PHCS. The marginal density function of Y_{s:\ell :N} is given by Balakrishnan et al. [26] as

    \begin{equation} f_{Y_{s:\ell :N}}(y_{s}|\alpha ) = c\left( N,s\right) \sum\limits_{q = 0}^{s-1}c_{q,s-1} {[1-F(y_{s})]}^{M_{q,s}-1}f(y_{s}), \end{equation} (6.1)

    where 1\leq s\leq \ell, c\left( N,s\right) = N\left( N-S_{1}-1\right)...\left( N-S_{1}...-S_{s-1}+1\right), \; M_{q,s} = N-S_{1}-...-S_{s-q-1}-s+q+1, \; and c_{q,s-1} = (-1)^{q}\left\{ \left[ \prod\limits_{u = 1}^{q}\sum\limits_{\upsilon = s-q}^{s-q+u-1}\left(S_{\upsilon }+1\right) \right] \left[ \prod\limits_{u = 1}^{s-q-1}\sum \limits_{\upsilon = u}^{s-q-1}\left(S_{\upsilon }+1\right) \right] \right\} ^{-1}.

    Upon substituting (1.2) and (1.1) in (6.1), the marginal density function of Y_{s:\ell :N} is then obtained as

    \begin{equation} f_{Y_{s:\ell :N}}(y_{s}|\alpha ) = c\left( N,s\right) \sum\limits_{q = 0}^{s-1}c_{q,s-1} \frac{\alpha \beta y_{s}^{\beta -1}}{1+y_{s}^{\beta }}\exp \left\{ -\alpha \left[ M_{q,s}\ln \left( 1+y_{s}^{\beta }\right) \right] \right\} ,\ \ y_{s} > 0. \end{equation} (6.2)

    Upon combining (4.2), (6.2) and using the MCMC method, given generalized PHCS, the Bayesian predictive density function of Y_{s:\ell :N} is obtained as

    \begin{eqnarray} f_{Y_{s:\ell :N}}^{\ast }(y_{s}|\underline{\bf{x}}) & = &\underset{0}{ \overset{\infty }{\int }}\underset{0}{\overset{\infty }{\int }}f_{Y_{s:\ell :N}}(y_{s}|\underline{\bf{x}})\pi ^{\ast }(\alpha ,\beta |\underline{ \bf{x}})d\alpha d\beta \\ & = &\frac{c\left( N,s\right)}{N-B}\sum\limits_{i = 1}^{N-B} \sum\limits_{q = 0}^{s-1}c_{q,s-1} \frac{\alpha^{(i)} \beta^{(i)} y_{s}^{\beta^{(i)} -1}}{1+y_{s}^{\beta^{(i)} }}\exp \left\{ -\alpha^{(i)} \left[ M_{q,s}\ln \left( 1+y_{s}^{\beta^{(i)} }\right) \right] \right\}. \end{eqnarray} (6.3)

    From (6.3), we simply obtain the predictive survival function of Y_{s:\ell :N} , given generalized PHCS, as

    \begin{eqnarray} \bar{F}^{\ast }(t|\underline{\bf{x}}) & = &\int\limits_{t}^{\infty }f_{Y_{s:\ell :N}}^{\ast }(y_{s}|\underline{\bf{x}})dy_{s} \\ & = &\frac{c\left( N,s\right)}{N-B}\sum\limits_{i = 1}^{N-B} \sum\limits_{q = 0}^{s-1}\frac{c_{q,s-1}}{M_{q,s}} \left(1+t^{\beta^{(i)}}\right)^{-\alpha^{(i)} M_{q,s}}. \end{eqnarray} (6.4)

    The Bayesian point predictor of Y_{s:\ell :N} , 1\leq s\leq \ell , under the squared error loss function is the mean of the predictive density, given by

    \begin{equation} \widehat{Y}_{s:\ell :N} = \int\limits_{0}^{\infty }y_{s}f_{Y_{s:\ell :N}}^{\ast }(y_{s}|\underline{\bf{x}})dy_{s}, \end{equation} (6.5)

    where f_{Y_{s:\ell :N}}^{\ast }(y_{s}|\underline{\bf{x}}) is given as in (6.3).

    The Bayesian predictive bounds of 100(1-\gamma)\% ET interval for Y_{s:\ell :N} , 1\leq s\leq \ell , can be obtained by solving the following two equations:

    \begin{equation} \bar{F}_{Y_{s:\ell :N}}^{\ast }(L_{ET}|\underline{\bf{x}}) = \frac{\gamma }{2}\quad \quad {\rm{and}}\quad \quad \bar{F}_{Y_{s:\ell :N}}^{\ast }(U_{ET}| \underline{\bf{x}}) = 1-\frac{\gamma }{2}, \end{equation} (6.6)

    where \bar{F}_{Y_{s:\ell :N}}^{\ast }(t|\underline{\bf{x}}) is given as in (6.4), and L_{ET} and U_{ET} denote the lower and upper bounds, respectively. For the HPD method, the following two equations need to be solved:

    \begin{equation*} \bar{F}_{Y_{s:\ell :N}}^{\ast }(L_{HPD}|\underline{\bf{x}})-\bar{F} _{Y_{s:\ell :N}}^{\ast }(U_{HPD}|\underline{\bf{x}}) = 1-\gamma, \end{equation*}

    and

    \begin{equation*} f_{Y_{s:\ell :N}}^{\ast }(L_{HPD}|\underline{\bf{x}})-f_{Y_{s:\ell :N}}^{\ast }(U_{HPD}|\underline{\bf{x}}) = 0, \end{equation*}

    where f_{Y_{s:\ell :N}}^{\ast }(y_{s}|\underline{\bf{x}}) is as in (6.3), and L_{HPD} and U_{HPD} denote the HPD lower and upper bounds, respectively.

    Before progressing further, first we describe how we generate generalized PHCS data for a given set n , m , k , R_{1}, R_{2}, ..., R_{m} and T . We use the transformation suggested in Balakrishnan and Aggarwala [1].

    Thus, the generalized PHCS data can be easily generated as follows. If T < X_{k:m:n} < X_{m:m:n} , then we have Case Ⅰ; using the transformation suggested in Ng et al. [27], the corresponding generalized PHCS is (X_{1:m:n}, ..., X_{D:m:n}, X_{D+1:m:n}, ..., X_{k:m:n}) . If X_{k:m:n} < T < X_{m:m:n} , then we have Case Ⅱ and we find D such that X_{D:m:n} < T < X_{D+1:m:n} ; the corresponding generalized PHCS is (X_{1:m:n}, ..., X_{D:m:n}) . If X_{k:m:n} < X_{m:m:n} < T , then we have Case Ⅲ and the corresponding generalized PHCS is (X_{1:m:n}, ..., X_{m:m:n}) .
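    A Python sketch of this generation step, using the uniform-transformation algorithm of Balakrishnan and Aggarwala [1] together with the Burr Type-Ⅻ quantile function (the helper name and defaults are illustrative); the resulting sample can then be reduced to Case Ⅰ, Ⅱ or Ⅲ as described above:

```python
import numpy as np

def progressive_sample_burr(alpha, beta, R, rng=None):
    """Generate a progressive Type-II censored sample X_{1:m:n} < ... < X_{m:m:n}
    from the Burr Type-XII(alpha, beta) distribution with scheme R = (R_1, ..., R_m),
    via the uniform transformation of Balakrishnan and Aggarwala [1]."""
    rng = np.random.default_rng() if rng is None else rng
    R = np.asarray(R, dtype=int)
    m = len(R)
    w = rng.uniform(size=m)
    gam = np.arange(1, m + 1) + np.cumsum(R[::-1])   # gamma_i = i + R_m + ... + R_{m-i+1}
    v = w ** (1.0 / gam)
    u = 1.0 - np.cumprod(v[::-1])                    # progressively censored U(0,1) sample
    # Burr Type-XII quantile function: F^{-1}(u) = ((1-u)^(-1/alpha) - 1)^(1/beta)
    return ((1.0 - u) ** (-1.0 / alpha) - 1.0) ** (1.0 / beta)
```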

    In this section, a Monte Carlo simulation study is carried out to compare the performance of the ML and Bayesian estimates under various sampling schemes. Different values of $n$, $m$, $k$ and $T$ are used to generate 2000 generalized PHCS samples from the Burr Type-Ⅻ distribution (with \alpha = 2 and \beta = 1 ). The values of T are chosen to ensure that the three cases of the generalized PHCS are represented: T_1 is chosen to lie in the first quarter of the data, T_2 is chosen around the mean, and T_3 is chosen within the third quarter; they are taken to be (T_1, T_2, T_3) = (0.5, 1.5, 2.5) . For the purpose of comparison, we computed the ML estimate and the Bayesian estimates of \alpha and \beta under the SE, LINEX (with \upsilon = 0.5) and GE (with \kappa = 0.5) loss functions using informative priors (IP) and non-informative priors (NIP). Also, we computed the mean square error (MSE) and the estimated expected bias (EB) for each estimate.

    Different sample sizes (n) are used in the simulation study with different effective sample sizes (m, k) , while the surviving units are removed using the following censoring schemes:

    1. Scheme 1: R_{i} = \frac{2\left(n-m\right) }{m} if i is odd and R_{i} = 0 if i is even.

    2. Scheme 2: R_{i} = \frac{2\left(n-m\right) }{m} if i is even and R_{i} = 0 if i is odd.

    3. Scheme 3: R_{i} = 0 for i = 1, 2, ..., D^{*}-1 , and R_{D^{*}} = n-D^{*} .

    All these schemes are adapted according to the case of the generalized progressive censoring, and all Bayesian results are computed based on two different choices of the hyperparameters (a, b, c, d) , namely,

    1. Informative prior (IP): a = 80 , b = 20 , c = 20 and d = 40 (by letting the mean of the marginal prior distribution of \alpha be 2 with variance 0.05, and the mean of the marginal prior distribution of \beta be 1 with variance 0.05).

    2. Non-informative prior (NIP): a = b = c = d = 0 .

    The 90% and 95% asymptotic confidence intervals and Bayesian credible intervals for \widehat{\alpha } , \widehat{\beta } , \widehat{R(t)} , and \widehat{h(t)} are constructed and their estimated average lengths (AL) are computed; also, the estimated coverage probabilities (CP) for these intervals are computed as the number of intervals that cover the true value divided by 2000. The credible intervals are obtained under informative and non-informative priors.
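    A minimal sketch of these simulation summaries (here EB is taken to be the average deviation of the estimates from the true value, which is an assumption about its exact definition; array names are illustrative):

```python
import numpy as np

def point_metrics(estimates, true_value):
    # MSE and EB over the simulation replications.
    est = np.asarray(estimates, dtype=float)
    return np.mean((est - true_value) ** 2), np.mean(est - true_value)

def interval_metrics(lowers, uppers, true_value):
    # Average length (AL) and coverage probability (CP) of the simulated intervals.
    lowers = np.asarray(lowers, dtype=float)
    uppers = np.asarray(uppers, dtype=float)
    return np.mean(uppers - lowers), np.mean((lowers <= true_value) & (true_value <= uppers))
```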

    Tables 1–4 present the values of the MSE and EB of the ML and Bayesian estimates for \alpha , \beta , R(t) , and h(t) , respectively, based on different values of T under three different censoring schemes, while Tables 5–8 present the AL of the 90% and 95% confidence intervals and the corresponding CP for \widehat{\alpha } , \widehat{\beta } , \widehat{R(t)} , and \widehat{h(t)} , respectively.

    Table 1.  MSE and EB of the ML and Bayesian estimates for \alpha based on the different censoring schemes.
    T (n, m, k) Sch. \widehat{\alpha }_{ML} \widehat{\alpha }_{BS}(IP) \widehat{\alpha }_{BS}(NIP) \widehat{\alpha }_{BL}(IP) \widehat{\alpha }_{BL}(NIP) \widehat{\alpha }_{BE}(IP) \widehat{\alpha }_{BE}(NIP)
    MSE
    0.5 (30, 20, 15) 1 0.7260 0.0376 0.8161 0.0367 0.5049 0.0374 0.5752
    (50, 20, 15) 1 0.7409 0.0295 0.8759 0.0288 0.5410 0.0292 0.6194
    (60, 30, 20) 1 0.5895 0.0266 0.6700 0.0261 0.4116 0.0263 0.5236
    (30, 20, 15) 2 0.7380 0.0387 0.8129 0.0377 0.5363 0.0384 0.5952
    (50, 20, 15) 2 0.7841 0.0303 0.8880 0.0298 0.5606 0.0303 0.6398
    (60, 30, 20) 2 0.4400 0.0268 0.4618 0.0265 0.3516 0.0268 0.3783
    (30, 20, 15) 3 0.7380 0.0387 0.8129 0.0377 0.5363 0.0384 0.5952
    (50, 20, 15) 3 2.0293 0.0266 3.5509 0.0260 1.0981 0.0263 1.7344
    (60, 30, 20) 3 0.4880 0.0249 0.5619 0.0245 0.3979 0.0248 0.4265
    1.5 (30, 20, 15) 1 0.4113 0.0398 0.4101 0.0387 0.3252 0.0393 0.3451
    (50, 20, 15) 1 0.9143 0.0313 0.9600 0.0306 0.5785 0.0310 0.6852
    (60, 30, 20) 1 0.3014 0.0265 0.3088 0.0261 0.2532 0.0264 0.2628
    (30, 20, 15) 2 0.4984 0.0392 0.5099 0.0380 0.3805 0.0384 0.4113
    (50, 20, 15) 2 0.7459 0.0294 0.8166 0.0288 0.5127 0.0292 0.5813
    (60, 30, 20) 2 0.3241 0.0252 0.3285 0.0246 0.2667 0.0249 0.2760
    (30, 20, 15) 3 0.7213 0.0378 0.8757 0.0366 0.5061 0.0370 0.6300
    (50, 20, 15) 3 1.7815 0.0281 2.6421 0.0276 0.9627 0.0279 1.4524
    (60, 30, 20) 3 0.5098 0.0234 0.5700 0.0229 0.3964 0.0232 0.4295
    2.5 (30, 20, 15) 1 0.4062 0.0384 0.4109 0.0372 0.3207 0.0377 0.3382
    (50, 20, 15) 1 0.8415 0.0348 0.9189 0.0341 0.5733 0.0346 0.6804
    (60, 30, 20) 1 0.3611 0.0269 0.3754 0.0263 0.2956 0.0265 0.3132
    (30, 20, 15) 2 0.4813 0.0407 0.4905 0.0394 0.3771 0.0399 0.3975
    (50, 20, 15) 2 0.8843 0.0315 1.0270 0.0305 0.6038 0.0307 0.7125
    (60, 30, 20) 2 0.3043 0.0273 0.3211 0.0266 0.2550 0.0268 0.2673
    (30, 20, 15) 3 0.6830 0.0378 0.8127 0.0366 0.5044 0.0371 0.5854
    (50, 20, 15) 3 2.1179 0.0280 3.1983 0.0275 1.0682 0.0278 1.6650
    (60, 30, 20) 3 0.5474 0.0252 0.6221 0.0248 0.4210 0.0251 0.4634
    EB
    0.5 (30, 20, 15) 1 0.2803 0.0055 0.3050 0.0106 0.1576 0.0186 0.1316
    (50, 20, 15) 1 0.2594 0.0080 0.3000 0.0055 0.1385 0.0122 0.1100
    (60, 30, 20) 1 0.1742 0.0088 0.1950 0.0029 0.0998 0.0087 0.0773
    (30, 20, 15) 2 0.2887 0.0053 0.3100 0.0108 0.1646 0.0188 0.1375
    (50, 20, 15) 2 0.2532 0.0009 0.2830 0.0122 0.1273 0.0188 0.0998
    (60, 30, 20) 2 0.1582 0.0003 0.1720 0.0111 0.0834 0.0168 0.0583
    (30, 20, 15) 3 0.2754 0.0053 0.3100 0.0108 0.1646 0.0188 0.1375
    (50, 20, 15) 3 0.5515 0.0049 0.7280 0.0074 0.3695 0.0136 0.4064
    (60, 30, 20) 3 0.2281 0.0034 0.2630 0.0071 0.1701 0.0124 0.1541
    1.5 (30, 20, 15) 1 0.1591 0.0116 0.1440 0.0044 0.0670 0.0123 0.0422
    (50, 20, 15) 1 0.3156 0.0063 0.3150 0.0071 0.1819 0.0137 0.1670
    (60, 30, 20) 1 0.1406 0.0029 0.1360 0.0083 0.0798 0.0139 0.0620
    (30, 20, 15) 2 0.2166 0.0168 0.2050 0.0010 0.1208 0.0068 0.0993
    (50, 20, 15) 2 0.2976 0.0056 0.3120 0.0075 0.1818 0.0140 0.1642
    (60, 30, 20) 2 0.1721 0.0079 0.1630 0.0033 0.1052 0.0088 0.0881
    (30, 20, 15) 3 0.2681 0.0165 0.2840 0.0013 0.1738 0.0061 0.1619
    (50, 20, 15) 3 0.4818 0.0022 0.6330 0.0101 0.3298 0.0163 0.3489
    (60, 30, 20) 3 0.2300 0.0060 0.2620 0.0046 0.1701 0.0098 0.1552
    2.5 (30, 20, 15) 1 0.1914 0.0141 0.1770 0.0018 0.1018 0.0097 0.0794
    (50, 20, 15) 1 0.2786 0.0015 0.2830 0.0118 0.1596 0.0183 0.1438
    (60, 30, 20) 1 0.1764 0.0080 0.1730 0.0033 0.1150 0.0089 0.0995
    (30, 20, 15) 2 0.2281 0.0159 0.2170 0.0001 0.1368 0.0079 0.1165
    (50, 20, 15) 2 0.3657 0.0178 0.3810 0.0046 0.2358 0.0018 0.2236
    (60, 30, 20) 2 0.1573 0.0137 0.1540 0.0025 0.0978 0.0030 0.0814
    (30, 20, 15) 3 0.2831 0.0164 0.3000 0.0011 0.1882 0.0063 0.1754
    (50, 20, 15) 3 0.5227 0.0027 0.6720 0.0096 0.3500 0.0158 0.3758
    (60, 30, 20) 3 0.2453 0.0001 0.2750 0.0105 0.1795 0.0158 0.1656

    Table 2.  MSE and EB of the ML and Bayesian estimates for \beta based on the different censoring schemes.
    T (n, m, k) Sch. \widehat{\beta }_{ML} \widehat{\beta }_{BS}(IP) \widehat{\beta }_{BS}(NIP) \widehat{\beta }_{BL}(IP) \widehat{\beta }_{BL}(NIP) \widehat{\beta }_{BE}(IP) \widehat{\beta }_{BE}(NIP)
    MSE
    0.5 (30, 20, 15) 1 0.0546 0.0083 0.0556 0.0081 0.0519 0.0080 0.0492
    (50, 20, 15) 1 0.0409 0.0064 0.0415 0.0064 0.0396 0.0063 0.0385
    (60, 30, 20) 1 0.0315 0.0058 0.0329 0.0057 0.0318 0.0057 0.0312
    (30, 20, 15) 2 0.0612 0.0086 0.0629 0.0084 0.0588 0.0082 0.0560
    (50, 20, 15) 2 0.0413 0.0068 0.0418 0.0067 0.0399 0.0066 0.0389
    (60, 30, 20) 2 0.0286 0.0057 0.0293 0.0056 0.0283 0.0055 0.0277
    (30, 20, 15) 3 0.0612 0.0086 0.0629 0.0084 0.0588 0.0082 0.0560
    (50, 20, 15) 3 0.0616 0.0058 0.0650 0.0058 0.0604 0.0057 0.0569
    (60, 30, 20) 3 0.0330 0.0054 0.0347 0.0053 0.0332 0.0052 0.0319
    1.5 (30, 20, 15) 1 0.0383 0.0087 0.0378 0.0085 0.0362 0.0083 0.0353
    (50, 20, 15) 1 0.0405 0.0067 0.0399 0.0066 0.0381 0.0064 0.0368
    (60, 30, 20) 1 0.0249 0.0059 0.0250 0.0058 0.0243 0.0057 0.0238
    (30, 20, 15) 2 0.0382 0.0083 0.0382 0.0082 0.0364 0.0080 0.0350
    (50, 20, 15) 2 0.0341 0.0065 0.0339 0.0064 0.0323 0.0063 0.0310
    (60, 30, 20) 2 0.0235 0.0052 0.0236 0.0051 0.0228 0.0051 0.0222
    (30, 20, 15) 3 0.0466 0.0078 0.0467 0.0077 0.0440 0.0076 0.0419
    (50, 20, 15) 3 0.0570 0.0064 0.0606 0.0063 0.0567 0.0061 0.0536
    (60, 30, 20) 3 0.0309 0.0051 0.0320 0.0050 0.0306 0.0050 0.0295
    2.5 (30, 20, 15) 1 0.0401 0.0086 0.0396 0.0084 0.0378 0.0083 0.0366
    (50, 20, 15) 1 0.0398 0.0075 0.0398 0.0074 0.0381 0.0073 0.0369
    (60, 30, 20) 1 0.0246 0.0059 0.0248 0.0058 0.0240 0.0057 0.0234
    (30, 20, 15) 2 0.0410 0.0086 0.0412 0.0084 0.0393 0.0083 0.0378
    (50, 20, 15) 2 0.0372 0.0066 0.0374 0.0066 0.0356 0.0065 0.0340
    (60, 30, 20) 2 0.0214 0.0057 0.0216 0.0056 0.0209 0.0055 0.0205
    (30, 20, 15) 3 0.0534 0.0084 0.0544 0.0082 0.0512 0.0081 0.0489
    (50, 20, 15) 3 0.0625 0.0063 0.0646 0.0062 0.0603 0.0061 0.0571
    (60, 30, 20) 3 0.0338 0.0056 0.0345 0.0055 0.0329 0.0054 0.0315
    EB
    0.5 (30, 20, 15) 1 0.0741 0.0134 0.0697 0.0104 0.0582 0.0048 0.0380
    (50, 20, 15) 1 0.0451 0.0071 0.0404 0.0049 0.0314 0.0007 0.0146
    (60, 30, 20) 1 0.0331 0.0059 0.0298 0.0040 0.0231 0.0004 0.0106
    (30, 20, 15) 2 0.0789 0.0130 0.0738 0.0101 0.0620 0.0045 0.0418
    (50, 20, 15) 2 0.0466 0.0101 0.0397 0.0080 0.0308 0.0038 0.0143
    (60, 30, 20) 2 0.0349 0.0091 0.0307 0.0073 0.0242 0.0038 0.0119
    (30, 20, 15) 3 0.0789 0.0130 0.0738 0.0101 0.0620 0.0045 0.0418
    (50, 20, 15) 3 0.0935 0.0091 0.0935 0.0071 0.0822 0.0032 0.0635
    (60, 30, 20) 3 0.0538 0.0082 0.0543 0.0064 0.0477 0.0030 0.0358
    1.5 (30, 20, 15) 1 0.0446 0.0105 0.0379 0.0076 0.0298 0.0019 0.0148
    (50, 20, 15) 1 0.0594 0.0096 0.0512 0.0074 0.0438 0.0031 0.0304
    (60, 30, 20) 1 0.0359 0.0080 0.0323 0.0062 0.0274 0.0026 0.0182
    (30, 20, 15) 2 0.0584 0.0103 0.0535 0.0074 0.0451 0.0019 0.0298
    (50, 20, 15) 2 0.0568 0.0091 0.0521 0.0070 0.0447 0.0029 0.0312
    (60, 30, 20) 2 0.0407 0.0070 0.0344 0.0052 0.0295 0.0017 0.0204
    (30, 20, 15) 3 0.0688 0.0088 0.0649 0.0060 0.0554 0.0006 0.0388
    (50, 20, 15) 3 0.0875 0.0105 0.0894 0.0085 0.0785 0.0046 0.0601
    (60, 30, 20) 3 0.0517 0.0067 0.0511 0.0050 0.0447 0.0016 0.0331
    2.5 (30, 20, 15) 1 0.0568 0.0113 0.0498 0.0084 0.0420 0.0027 0.0278
    (50, 20, 15) 1 0.0585 0.0115 0.0519 0.0093 0.0447 0.0051 0.0316
    (60, 30, 20) 1 0.0425 0.0079 0.0400 0.0060 0.0352 0.0025 0.0263
    (30, 20, 15) 2 0.0631 0.0108 0.0582 0.0078 0.0503 0.0022 0.0360
    (50, 20, 15) 2 0.0659 0.0056 0.0590 0.0035 0.0516 0.0006 0.0381
    (60, 30, 20) 2 0.0318 0.0039 0.0283 0.0021 0.0236 0.0014 0.0146
    (30, 20, 15) 3 0.0748 0.0091 0.0714 0.0063 0.0619 0.0009 0.0456
    (50, 20, 15) 3 0.0913 0.0096 0.0915 0.0076 0.0806 0.0037 0.0622
    (60, 30, 20) 3 0.0592 0.0103 0.0577 0.0085 0.0511 0.0051 0.0394

    Table 3.  MSE and EB of the ML and Bayesian estimates for R(t) at the different censoring schemes.
    T (n, m, k) Sch. \widehat{R(t) }_{ML} \widehat{R(t) }_{BS}(IP) \widehat{R(t) }_{BS}(NIP) \widehat{R(t) }_{BL}(IP) \widehat{R(t) }_{BL}(NIP) \widehat{R(t) }_{BE}(IP) \widehat{R(t) }_{BE}(NIP)
    MSE
    0.5 (30, 20, 15) 1 4.0E-04 3.0E-06 1.2E-03 3.0E-06 1.1E-03 2.0E-06 2.0E-04
    (50, 20, 15) 1 7.0E-04 3.0E-06 2.0E-03 3.0E-06 1.9E-03 1.0E-06 3.0E-04
    (60, 30, 20) 1 5.0E-04 3.0E-06 1.2E-03 3.0E-06 1.1E-03 2.0E-06 3.0E-04
    (30, 20, 15) 2 5.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 2.0E-06 3.0E-04
    (50, 20, 15) 2 9.0E-04 3.0E-06 2.2E-03 3.0E-06 2.1E-03 2.0E-06 4.0E-04
    (60, 30, 20) 2 6.0E-04 3.0E-06 1.3E-03 3.0E-06 1.2E-03 2.0E-06 4.0E-04
    (30, 20, 15) 3 5.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 2.0E-06 3.0E-04
    (50, 20, 15) 3 5.0E-04 2.0E-06 1.3E-03 2.0E-06 1.2E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 3.0E-04 3.0E-06 6.0E-04 3.0E-06 6.0E-04 2.0E-06 2.0E-04
    1.5 (30, 20, 15) 1 4.0E-04 4.0E-06 9.0E-04 4.0E-06 8.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 1 5.0E-04 3.0E-06 1.1E-03 3.0E-06 1.1E-03 2.0E-06 3.0E-04
    (60, 30, 20) 1 2.0E-04 4.0E-06 5.0E-04 4.0E-06 5.0E-04 2.0E-06 2.0E-04
    (30, 20, 15) 2 4.0E-04 4.0E-06 8.0E-04 4.0E-06 8.0E-04 2.0E-06 3.0E-04
    (50, 20, 15) 2 4.0E-04 3.0E-06 9.0E-04 3.0E-06 9.0E-04 2.0E-06 2.0E-04
    (60, 30, 20) 2 2.0E-04 4.0E-06 4.0E-04 4.0E-06 4.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 3 5.0E-04 3.0E-06 1.3E-03 3.0E-06 1.2E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    2.5 (30, 20, 15) 1 2.0E-04 3.0E-06 6.0E-04 3.0E-06 6.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 1 4.0E-04 4.0E-06 9.0E-04 4.0E-06 9.0E-04 2.0E-06 3.0E-04
    (60, 30, 20) 1 2.0E-04 4.0E-06 4.0E-04 4.0E-06 4.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 2 3.0E-04 4.0E-06 6.0E-04 4.0E-06 6.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 2 3.0E-04 3.0E-06 7.0E-04 3.0E-06 7.0E-04 2.0E-06 2.0E-04
    (60, 30, 20) 2 2.0E-04 4.0E-06 3.0E-04 4.0E-06 3.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    (50, 20, 15) 3 6.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    EB
    0.5 (30, 20, 15) 1 5.2E-03 1.1E-03 2.1E-02 1.1E-03 2.0E-02 5.0E-04 1.1E-03
    (50, 20, 15) 1 8.8E-03 1.1E-03 2.7E-02 1.1E-03 2.6E-02 5.0E-04 1.0E-05
    (60, 30, 20) 1 6.7E-03 1.0E-03 1.9E-02 1.0E-03 1.9E-02 5.0E-04 3.0E-04
    (30, 20, 15) 2 6.1E-03 1.1E-03 2.2E-02 1.1E-03 2.2E-02 5.0E-04 2.0E-04
    (50, 20, 15) 2 9.9E-03 1.1E-03 2.8E-02 1.1E-03 2.7E-02 5.0E-04 8.0E-04
    (60, 30, 20) 2 7.1E-03 1.1E-03 1.9E-02 1.1E-03 1.9E-02 5.0E-04 8.0E-04
    (30, 20, 15) 3 6.1E-03 1.1E-03 2.2E-02 1.1E-03 2.2E-02 5.0E-04 2.0E-04
    (50, 20, 15) 3 5.5E-03 1.0E-03 2.0E-02 1.0E-03 2.0E-02 6.0E-04 1.9E-03
    (60, 30, 20) 3 4.1E-03 1.0E-03 1.3E-02 1.0E-03 1.3E-02 5.0E-04 8.0E-04
    1.5 (30, 20, 15) 1 5.1E-03 1.1E-03 1.7E-02 1.0E-03 1.6E-02 5.0E-04 7.0E-04
    (50, 20, 15) 1 5.4E-03 1.0E-03 1.8E-02 1.0E-03 1.8E-02 5.0E-04 5.0E-04
    (60, 30, 20) 1 3.6E-03 1.1E-03 1.1E-02 1.0E-03 1.1E-02 4.0E-04 2.0E-04
    (30, 20, 15) 2 4.1E-03 9.0E-04 1.5E-02 9.0E-04 1.5E-02 6.0E-04 2.0E-05
    (50, 20, 15) 2 4.6E-03 1.0E-03 1.6E-02 1.0E-03 1.6E-02 5.0E-04 6.0E-04
    (60, 30, 20) 2 2.9E-03 1.0E-03 1.1E-02 1.0E-03 1.0E-02 5.0E-04 2.0E-04
    (30, 20, 15) 3 2.6E-03 1.0E-03 1.3E-02 9.0E-04 1.3E-02 6.0E-04 1.5E-03
    (50, 20, 15) 3 5.2E-03 1.0E-03 2.0E-02 1.0E-03 2.0E-02 6.0E-04 2.1E-03
    (60, 30, 20) 3 3.0E-03 1.0E-03 1.2E-02 1.0E-03 1.2E-02 5.0E-04 1.4E-03
    2.5 (30, 20, 15) 1 3.3E-03 9.0E-04 1.3E-02 9.0E-04 1.3E-02 6.0E-04 3.0E-04
    (50, 20, 15) 1 5.1E-03 1.1E-03 1.7E-02 1.1E-03 1.7E-02 5.0E-04 1.0E-04
    (60, 30, 20) 1 2.4E-03 9.0E-04 9.4E-03 9.0E-04 9.3E-03 5.0E-04 6.0E-04
    (30, 20, 15) 2 3.3E-03 1.0E-03 1.3E-02 9.0E-04 1.3E-02 6.0E-04 2.0E-04
    (50, 20, 15) 2 2.8E-03 9.0E-04 1.4E-02 9.0E-04 1.4E-02 6.0E-04 1.6E-03
    (60, 30, 20) 2 2.5E-03 1.0E-03 9.7E-03 1.0E-03 9.6E-03 5.0E-04 6.0E-04
    (30, 20, 15) 3 2.9E-03 9.0E-04 1.3E-02 9.0E-04 1.3E-02 6.0E-04 1.3E-03
    (50, 20, 15) 3 5.9E-03 1.1E-03 2.1E-02 1.0E-03 2.0E-02 5.0E-04 1.5E-03
    (60, 30, 20) 3 3.1E-03 1.0E-03 1.2E-02 1.0E-03 1.2E-02 5.0E-04 1.4E-03

    Table 4.  MSE and EB of the ML and Bayesian estimates for h(t) at different censoring schemes.
    T (n, m, k) Sch. \widehat{h(t)}_{ML} \widehat{h(t)}_{BS}(IP) \widehat{h(t)}_{BS}(NIP) \widehat{h(t)}_{BL}(IP) \widehat{h(t)}_{BL}(NIP) \widehat{h(t)}_{BE}(IP) \widehat{h(t)}_{BE}(NIP)
    MSE
    0.5 (30, 20, 15) 1 2.3E-02 3.0E-05 3.0E-02 3.0E-05 2.5E-02 3.0E-05 1.6E-02
    (50, 20, 15) 1 2.0E-02 3.0E-05 2.7E-02 3.0E-05 2.3E-02 3.0E-05 1.5E-02
    (60, 30, 20) 1 2.2E-02 4.0E-05 2.9E-02 4.0E-05 2.1E-02 4.0E-05 1.8E-02
    (30, 20, 15) 2 2.4E-02 3.0E-05 3.0E-02 3.0E-05 2.6E-02 4.0E-05 1.7E-02
    (50, 20, 15) 2 2.0E-02 3.0E-05 2.6E-02 3.0E-05 2.3E-02 3.0E-05 1.5E-02
    (60, 30, 20) 2 1.1E-02 4.0E-05 1.3E-02 4.0E-05 1.2E-02 4.0E-05 8.7E-03
    (30, 20, 15) 3 2.4E-02 3.0E-05 3.0E-02 3.0E-05 2.6E-02 4.0E-05 1.7E-02
    (50, 20, 15) 3 6.8E-02 3.0E-05 1.7E-01 3.0E-05 8.6E-02 3.0E-05 5.2E-02
    (60, 30, 20) 3 1.4E-02 4.0E-05 1.8E-02 4.0E-05 1.6E-02 4.0E-05 1.1E-02
    1.5 (30, 20, 15) 1 9.8E-03 4.0E-05 1.0E-02 4.0E-05 9.7E-03 5.0E-05 7.3E-03
    (50, 20, 15) 1 2.6E-02 4.0E-05 3.1E-02 4.0E-05 2.6E-02 4.0E-05 1.7E-02
    (60, 30, 20) 1 7.7E-03 6.0E-05 8.4E-03 6.0E-05 8.0E-03 6.0E-05 6.2E-03
    (30, 20, 15) 2 1.2E-02 4.0E-05 1.4E-02 4.0E-05 1.3E-02 5.0E-05 9.1E-03
    (50, 20, 15) 2 1.8E-02 4.0E-05 2.2E-02 4.0E-05 1.9E-02 4.0E-05 1.2E-02
    (60, 30, 20) 2 7.8E-03 5.0E-05 8.5E-03 5.0E-05 8.1E-03 6.0E-05 6.2E-03
    (30, 20, 15) 3 2.4E-02 4.0E-05 3.3E-02 4.0E-05 2.6E-02 4.0E-05 1.8E-02
    (50, 20, 15) 3 6.0E-02 3.0E-05 1.1E-01 3.0E-05 7.0E-02 3.0E-05 4.4E-02
    (60, 30, 20) 3 1.4E-02 4.0E-05 1.8E-02 4.0E-05 1.6E-02 4.0E-05 1.1E-02
    2.5 (30, 20, 15) 1 1.1E-02 5.0E-05 1.2E-02 5.0E-05 1.1E-02 5.0E-05 8.2E-03
    (50, 20, 15) 1 2.2E-02 4.0E-05 2.7E-02 4.0E-05 2.3E-02 4.0E-05 1.6E-02
    (60, 30, 20) 1 9.1E-03 5.0E-05 1.0E-02 6.0E-05 9.5E-03 6.0E-05 7.3E-03
    (30, 20, 15) 2 1.2E-02 5.0E-05 1.3E-02 5.0E-05 1.3E-02 5.0E-05 9.0E-03
    (50, 20, 15) 2 2.4E-02 3.0E-05 3.2E-02 3.0E-05 2.7E-02 4.0E-05 1.8E-02
    (60, 30, 20) 2 6.9E-03 5.0E-05 7.9E-03 5.0E-05 7.4E-03 5.0E-05 5.7E-03
    (30, 20, 15) 3 2.3E-02 4.0E-05 3.3E-02 4.0E-05 2.6E-02 4.0E-05 1.8E-02
    (50, 20, 15) 3 7.9E-02 3.0E-05 1.5E-01 3.0E-05 7.9E-02 3.0E-05 5.4E-02
    (60, 30, 20) 3 1.6E-02 4.0E-05 2.0E-02 4.0E-05 1.8E-02 4.0E-05 1.2E-02
    EB
    0.5 (30, 20, 15) 1 5.5E-02 1.2E-04 6.8E-02 2.3E-04 6.2E-02 1.9E-03 2.5E-02
    (50, 20, 15) 1 4.7E-02 2.6E-04 6.2E-02 3.7E-04 5.6E-02 2.0E-03 1.8E-02
    (60, 30, 20) 1 3.2E-02 2.0E-05 4.2E-02 8.0E-05 3.8E-02 1.7E-03 1.4E-02
    (30, 20, 15) 2 5.8E-02 2.4E-04 7.1E-02 3.5E-04 6.5E-02 2.0E-03 2.7E-02
    (50, 20, 15) 2 4.6E-02 2.5E-04 6.0E-02 3.6E-04 5.4E-02 2.0E-03 1.7E-02
    (60, 30, 20) 2 3.0E-02 1.0E-05 3.7E-02 1.0E-04 3.5E-02 1.7E-03 1.1E-02
    (30, 20, 15) 3 5.8E-02 2.4E-04 7.1E-02 3.5E-04 6.5E-02 2.0E-03 2.7E-02
    (50, 20, 15) 3 1.0E-01 2.4E-04 1.5E-01 1.3E-04 1.2E-01 1.5E-03 6.6E-02
    (60, 30, 20) 3 4.2E-02 2.8E-04 5.4E-02 1.7E-04 5.0E-02 1.4E-03 2.6E-02
    1.5 (30, 20, 15) 1 3.0E-02 2.7E-04 3.1E-02 3.7E-04 2.9E-02 2.0E-03 8.8E-03
    (50, 20, 15) 1 5.6E-02 7.0E-05 6.2E-02 4.0E-05 5.7E-02 1.7E-03 2.8E-02
    (60, 30, 20) 1 2.6E-02 8.0E-06 2.9E-02 1.1E-04 2.7E-02 1.7E-03 1.2E-02
    (30, 20, 15) 2 4.0E-02 2.7E-04 4.2E-02 1.6E-04 4.0E-02 1.5E-03 1.8E-02
    (50, 20, 15) 2 5.0E-02 3.0E-05 6.0E-02 8.0E-05 5.5E-02 1.7E-03 2.6E-02
    (60, 30, 20) 2 3.0E-02 3.5E-04 3.2E-02 2.5E-04 3.0E-02 1.3E-03 1.5E-02
    (30, 20, 15) 3 5.1E-02 9.0E-05 5.9E-02 8.0E-06 5.4E-02 1.7E-03 2.8E-02
    (50, 20, 15) 3 8.8E-02 2.1E-04 1.3E-01 1.0E-04 1.1E-01 1.6E-03 5.7E-02
    (60, 30, 20) 3 4.2E-02 2.6E-04 5.2E-02 1.5E-04 4.9E-02 1.5E-03 2.6E-02
    2.5 (30, 20, 15) 1 3.6E-02 1.7E-04 3.7E-02 7.0E-05 3.5E-02 1.5E-03 1.6E-02
    (50, 20, 15) 1 5.0E-02 1.1E-04 5.7E-02 2.2E-04 5.3E-02 1.8E-03 2.5E-02
    (60, 30, 20) 1 3.1E-02 4.2E-04 3.4E-02 3.2E-04 3.2E-02 1.2E-03 1.8E-02
    (30, 20, 15) 2 4.2E-02 1.8E-04 4.4E-02 8.0E-05 4.1E-02 1.5E-03 2.1E-02
    (50, 20, 15) 2 6.1E-02 3.0E-04 7.1E-02 2.0E-04 6.5E-02 1.4E-03 3.4E-02
    (60, 30, 20) 2 2.6E-02 9.0E-05 2.8E-02 8.0E-06 2.7E-02 1.6E-03 1.2E-02
    (30, 20, 15) 3 5.4E-02 8.0E-05 6.4E-02 3.0E-05 5.8E-02 1.7E-03 3.2E-02
    (50, 20, 15) 3 9.7E-02 6.0E-05 1.4E-01 5.0E-05 1.2E-01 1.7E-03 6.3E-02
    (60, 30, 20) 3 4.5E-02 4.1E-04 5.6E-02 3.0E-04 5.2E-02 1.3E-03 2.8E-02

    Table 5.  The AL of 90% and 95% confidence intervals and corresponding CP for \widehat{\alpha }_{ML} and \widehat{\alpha }_{B} based on the different censoring schemes.
    (n, m, k) Sch. | \widehat{\alpha }_{ML}: AL(90%) CP(90%) AL(95%) CP(95%) | \widehat{\alpha }_{B}(IP): AL(90%) CP(90%) AL(95%) CP(95%) | \widehat{\alpha }_{B}(NIP): AL(90%) CP(90%) AL(95%) CP(95%)
    T=0.5
    (30, 20, 15) 1 2.485 0.937 2.874 0.969 0.829 0.959 0.980 0.985 2.439 0.881 2.853 0.947
    (50, 20, 15) 1 2.610 0.949 3.008 0.964 0.753 0.966 0.896 0.990 2.554 0.897 3.015 0.944
    (60, 30, 20) 1 1.941 0.925 2.313 0.960 0.699 0.967 0.836 0.989 1.908 0.885 2.283 0.941
    (30, 20, 15) 2 2.479 0.927 2.892 0.959 0.826 0.949 0.981 0.985 2.412 0.876 2.857 0.931
    (50, 20, 15) 2 2.652 0.939 2.964 0.956 0.749 0.974 0.884 0.985 2.621 0.885 2.948 0.942
    (60, 30, 20) 2 1.907 0.917 2.272 0.949 0.697 0.965 0.825 0.988 1.861 0.883 2.233 0.925
    (30, 20, 15) 3 2.204 0.934 2.892 0.959 0.806 0.946 0.981 0.985 2.166 0.897 2.857 0.931
    (50, 20, 15) 3 3.342 0.947 3.935 0.970 0.725 0.967 0.859 0.994 3.575 0.874 4.316 0.931
    (60, 30, 20) 3 1.894 0.932 2.228 0.964 0.675 0.970 0.795 0.984 1.870 0.871 2.235 0.930
    T=1.5
    (30, 20, 15) 1 1.850 0.891 2.150 0.962 0.823 0.953 0.974 0.985 1.793 0.868 2.071 0.946
    (50, 20, 15) 1 2.235 0.943 2.686 0.954 0.749 0.964 0.892 0.984 2.173 0.895 2.622 0.910
    (60, 30, 20) 1 1.548 0.896 1.820 0.965 0.692 0.950 0.819 0.989 1.506 0.873 1.773 0.937
    (30, 20, 15) 2 1.831 0.904 2.204 0.961 0.821 0.963 0.973 0.981 1.769 0.875 2.144 0.936
    (50, 20, 15) 2 2.587 0.923 2.676 0.962 0.748 0.963 0.883 0.984 2.551 0.865 2.617 0.932
    (60, 30, 20) 2 1.519 0.919 1.846 0.960 0.684 0.966 0.816 0.990 1.483 0.893 1.791 0.932
    (30, 20, 15) 3 1.968 0.938 2.380 0.968 0.801 0.959 0.950 0.986 1.919 0.884 2.356 0.939
    (50, 20, 15) 3 3.419 0.943 3.753 0.971 0.728 0.961 0.859 0.988 3.635 0.861 4.000 0.935
    (60, 30, 20) 3 1.856 0.932 2.214 0.968 0.668 0.968 0.794 0.987 1.854 0.877 2.207 0.936
    T=2.5
    (30, 20, 15) 1 1.775 0.919 2.115 0.961 0.823 0.964 0.975 0.985 1.722 0.889 2.042 0.946
    (50, 20, 15) 1 2.185 0.950 2.584 0.964 0.752 0.965 0.885 0.985 2.134 0.882 2.519 0.926
    (60, 30, 20) 1 1.510 0.926 1.826 0.959 0.693 0.956 0.820 0.985 1.467 0.907 1.775 0.932
    (30, 20, 15) 2 1.852 0.897 2.158 0.954 0.817 0.953 0.977 0.979 1.819 0.875 2.104 0.925
    (50, 20, 15) 2 2.333 0.957 2.780 0.970 0.745 0.967 0.887 0.986 2.263 0.881 2.736 0.941
    (60, 30, 20) 2 1.504 0.918 1.803 0.965 0.684 0.964 0.816 0.985 1.461 0.899 1.759 0.932
    (30, 20, 15) 3 1.930 0.943 2.400 0.970 0.805 0.944 0.952 0.990 1.890 0.904 2.391 0.926
    (50, 20, 15) 3 3.492 0.946 3.870 0.962 0.725 0.967 0.856 0.987 3.638 0.854 4.114 0.914
    (60, 30, 20) 3 1.896 0.932 2.240 0.974 0.668 0.956 0.792 0.977 1.877 0.861 2.240 0.936

    Table 6.  The AL of 90\% and 95\% confidence intervals and corresponding CP for \widehat{\beta }_{ML} and \widehat{\beta }_{B} based on the different censoring schemes.
    Columns: \widehat{\beta }_{ML} , then the Bayesian estimator \widehat{\beta }_{B} under the informative (IP) and non-informative (NIP) priors; each block reports AL and CP at the 90\% and 95\% levels.
    (n, m, k) Sch. | ML: AL CP (90\%) AL CP (95\%) | IP: AL CP (90\%) AL CP (95\%) | NIP: AL CP (90\%) AL CP (95\%)
    T = 0.5
    (30, 20, 15) 1 0.718 0.883 0.847 0.964 0.351 0.936 0.416 0.970 0.700 0.874 0.812 0.942
    (50, 20, 15) 1 0.641 0.886 0.753 0.949 0.303 0.922 0.358 0.972 0.619 0.859 0.721 0.932
    (60, 30, 20) 1 0.546 0.897 0.644 0.944 0.281 0.933 0.329 0.968 0.535 0.873 0.619 0.913
    (30, 20, 15) 2 0.717 0.890 0.851 0.954 0.350 0.936 0.413 0.971 0.692 0.875 0.816 0.917
    (50, 20, 15) 2 0.631 0.887 0.747 0.952 0.298 0.936 0.355 0.964 0.613 0.865 0.715 0.930
    (60, 30, 20) 2 0.539 0.900 0.642 0.956 0.279 0.943 0.328 0.963 0.524 0.887 0.614 0.930
    (30, 20, 15) 3 0.704 0.908 0.851 0.954 0.344 0.923 0.413 0.971 0.685 0.874 0.816 0.917
    (50, 20, 15) 3 0.706 0.888 0.831 0.951 0.290 0.924 0.342 0.968 0.680 0.845 0.793 0.919
    (60, 30, 20) 3 0.533 0.907 0.640 0.953 0.270 0.941 0.321 0.971 0.519 0.882 0.616 0.916
    T = 1.5
    (30, 20, 15) 1 0.605 0.899 0.712 0.943 0.352 0.944 0.414 0.965 0.588 0.880 0.683 0.924
    (50, 20, 15) 1 0.573 0.898 0.679 0.941 0.303 0.938 0.360 0.971 0.559 0.878 0.652 0.913
    (60, 30, 20) 1 0.463 0.877 0.551 0.926 0.279 0.921 0.330 0.967 0.452 0.865 0.535 0.909
    (30, 20, 15) 2 0.599 0.894 0.715 0.953 0.347 0.939 0.412 0.972 0.586 0.871 0.695 0.933
    (50, 20, 15) 2 0.631 0.900 0.680 0.951 0.299 0.938 0.353 0.966 0.612 0.881 0.654 0.937
    (60, 30, 20) 2 0.463 0.889 0.551 0.950 0.279 0.941 0.326 0.964 0.454 0.862 0.531 0.928
    (30, 20, 15) 3 0.643 0.904 0.762 0.961 0.346 0.936 0.404 0.977 0.625 0.884 0.736 0.945
    (50, 20, 15) 3 0.695 0.907 0.825 0.955 0.287 0.931 0.343 0.968 0.672 0.859 0.784 0.917
    (60, 30, 20) 3 0.534 0.901 0.633 0.949 0.271 0.936 0.320 0.973 0.520 0.873 0.607 0.928
    T = 2.5
    (30, 20, 15) 1 0.586 0.900 0.697 0.940 0.352 0.933 0.415 0.964 0.575 0.883 0.672 0.920
    (50, 20, 15) 1 0.558 0.910 0.668 0.947 0.301 0.935 0.357 0.951 0.540 0.875 0.643 0.912
    (60, 30, 20) 1 0.454 0.899 0.545 0.930 0.278 0.935 0.330 0.968 0.444 0.881 0.526 0.909
    (30, 20, 15) 2 0.594 0.891 0.698 0.947 0.349 0.933 0.414 0.965 0.582 0.857 0.676 0.915
    (50, 20, 15) 2 0.576 0.889 0.683 0.952 0.301 0.941 0.353 0.961 0.559 0.857 0.656 0.935
    (60, 30, 20) 2 0.460 0.873 0.539 0.952 0.279 0.921 0.325 0.969 0.446 0.855 0.525 0.934
    (30, 20, 15) 3 0.640 0.903 0.766 0.952 0.348 0.924 0.405 0.973 0.624 0.878 0.735 0.928
    (50, 20, 15) 3 0.703 0.896 0.828 0.944 0.289 0.937 0.343 0.960 0.672 0.861 0.782 0.905
    (60, 30, 20) 3 0.535 0.910 0.638 0.951 0.270 0.930 0.322 0.968 0.521 0.886 0.616 0.922

    Table 7.  The AL of 90\% and 95\% confidence intervals and corresponding CP for \widehat{R(t) }_{ML} and \widehat{R(t) }_{B} based on the different censoring schemes.
    Columns: \widehat{R(t) }_{ML} , then the Bayesian estimator \widehat{R(t) }_{B} under the informative (IP) and non-informative (NIP) priors; each block reports AL and CP at the 90\% and 95\% levels.
    (n, m, k) Sch. | ML: AL CP (90\%) AL CP (95\%) | IP: AL CP (90\%) AL CP (95\%) | NIP: AL CP (90\%) AL CP (95\%)
    T = 0.5
    (30, 20, 15) 1 0.059 0.669 0.073 0.712 0.014 1.000 0.017 1.000 0.092 0.887 0.123 0.940
    (50, 20, 15) 1 0.075 0.701 0.094 0.726 0.014 1.000 0.017 1.000 0.109 0.884 0.144 0.939
    (60, 30, 20) 1 0.061 0.730 0.073 0.758 0.014 1.000 0.017 1.000 0.082 0.872 0.105 0.936
    (30, 20, 15) 2 0.059 0.675 0.075 0.687 0.014 1.000 0.018 1.000 0.093 0.886 0.125 0.934
    (50, 20, 15) 2 0.074 0.685 0.095 0.716 0.014 1.000 0.018 1.000 0.106 0.869 0.143 0.945
    (60, 30, 20) 2 0.061 0.740 0.074 0.767 0.014 1.000 0.017 1.000 0.084 0.886 0.104 0.925
    (30, 20, 15) 3 0.067 0.711 0.075 0.687 0.014 0.999 0.018 1.000 0.096 0.884 0.125 0.934
    (50, 20, 15) 3 0.060 0.607 0.076 0.652 0.014 1.000 0.017 1.000 0.088 0.854 0.120 0.917
    (60, 30, 20) 3 0.047 0.707 0.057 0.721 0.014 1.000 0.017 1.000 0.062 0.866 0.080 0.927
    T = 1.5
    (30, 20, 15) 1 0.049 0.717 0.063 0.764 0.014 1.000 0.017 1.000 0.071 0.868 0.095 0.937
    (50, 20, 15) 1 0.049 0.661 0.065 0.713 0.014 1.000 0.017 1.000 0.075 0.890 0.103 0.910
    (60, 30, 20) 1 0.040 0.738 0.050 0.781 0.014 1.000 0.017 1.000 0.053 0.879 0.070 0.920
    (30, 20, 15) 2 0.049 0.722 0.057 0.716 0.014 1.000 0.017 1.000 0.070 0.875 0.089 0.935
    (50, 20, 15) 2 0.075 0.664 0.063 0.716 0.014 1.000 0.017 1.000 0.107 0.877 0.099 0.931
    (60, 30, 20) 2 0.040 0.746 0.047 0.746 0.014 1.000 0.017 1.000 0.053 0.878 0.067 0.921
    (30, 20, 15) 3 0.046 0.696 0.053 0.728 0.014 1.000 0.017 1.000 0.066 0.873 0.085 0.942
    (50, 20, 15) 3 0.063 0.622 0.075 0.677 0.014 1.000 0.017 1.000 0.092 0.857 0.122 0.924
    (60, 30, 20) 3 0.045 0.730 0.053 0.741 0.014 1.000 0.017 1.000 0.059 0.871 0.076 0.928
    T = 2.5
    (30, 20, 15) 1 0.045 0.718 0.053 0.748 0.014 1.000 0.017 1.000 0.064 0.883 0.083 0.930
    (50, 20, 15) 1 0.051 0.689 0.064 0.722 0.014 1.000 0.017 1.000 0.075 0.874 0.101 0.925
    (60, 30, 20) 1 0.038 0.757 0.045 0.764 0.014 1.000 0.016 1.000 0.051 0.912 0.064 0.920
    (30, 20, 15) 2 0.042 0.695 0.052 0.708 0.014 1.000 0.017 1.000 0.060 0.870 0.080 0.919
    (50, 20, 15) 2 0.049 0.668 0.056 0.697 0.014 0.999 0.017 1.000 0.074 0.861 0.094 0.941
    (60, 30, 20) 2 0.037 0.752 0.045 0.789 0.014 1.000 0.016 1.000 0.050 0.878 0.065 0.925
    (30, 20, 15) 3 0.044 0.716 0.053 0.712 0.014 1.000 0.017 1.000 0.065 0.911 0.084 0.932
    (50, 20, 15) 3 0.060 0.622 0.076 0.667 0.014 1.000 0.017 1.000 0.089 0.849 0.121 0.918
    (60, 30, 20) 3 0.044 0.715 0.053 0.717 0.014 1.000 0.017 1.000 0.059 0.864 0.076 0.930

    Table 8.  The AL of 90% and 95% confidence intervals and corresponding CP for \widehat{h(t) }_{ML} and \widehat{h(t) }_{B} based on the different censoring schemes.
    Columns: \widehat{h(t) }_{ML} , then the Bayesian estimator \widehat{h(t) }_{B} under the informative (IP) and non-informative (NIP) priors; each block reports AL and CP at the 90\% and 95\% levels.
    (n, m, k) Sch. | ML: AL CP (90\%) AL CP (95\%) | IP: AL CP (90\%) AL CP (95\%) | NIP: AL CP (90\%) AL CP (95\%)
    T = 0.5
    (30, 20, 15) 1 0.410 0.669 0.467 0.712 0.068 1.000 0.081 1.000 0.413 0.890 0.484 0.944
    (50, 20, 15) 1 0.412 0.701 0.464 0.726 0.068 1.000 0.080 1.000 0.412 0.880 0.486 0.937
    (60, 30, 20) 1 0.299 0.730 0.355 0.758 0.067 1.000 0.080 1.000 0.300 0.870 0.362 0.940
    (30, 20, 15) 2 0.409 0.675 0.475 0.687 0.068 1.000 0.081 1.000 0.405 0.883 0.489 0.930
    (50, 20, 15) 2 0.422 0.685 0.458 0.716 0.068 1.000 0.080 1.000 0.429 0.872 0.475 0.936
    (60, 30, 20) 2 0.290 0.740 0.346 0.767 0.067 1.000 0.080 1.000 0.286 0.884 0.349 0.926
    (30, 20, 15) 3 0.362 0.711 0.475 0.687 0.068 1.000 0.081 1.000 0.365 0.882 0.489 0.930
    (50, 20, 15) 3 0.577 0.607 0.678 0.652 0.068 1.000 0.081 1.000 0.670 0.855 0.821 0.915
    (60, 30, 20) 3 0.303 0.707 0.358 0.721 0.067 1.000 0.079 1.000 0.304 0.865 0.372 0.929
    T = 1.5
    (30, 20, 15) 1 0.279 0.717 0.318 0.764 0.067 1.000 0.079 1.000 0.270 0.872 0.306 0.934
    (50, 20, 15) 1 0.344 0.661 0.417 0.713 0.067 1.000 0.080 1.000 0.339 0.892 0.417 0.907
    (60, 30, 20) 1 0.228 0.738 0.269 0.781 0.066 1.000 0.078 1.000 0.222 0.874 0.264 0.919
    (30, 20, 15) 2 0.275 0.722 0.333 0.716 0.067 0.999 0.080 1.000 0.265 0.873 0.327 0.936
    (50, 20, 15) 2 0.406 0.664 0.411 0.716 0.068 1.000 0.080 1.000 0.412 0.882 0.414 0.930
    (60, 30, 20) 2 0.226 0.746 0.275 0.746 0.066 1.000 0.078 1.000 0.221 0.874 0.267 0.922
    (30, 20, 15) 3 0.321 0.696 0.388 0.728 0.067 1.000 0.080 1.000 0.317 0.877 0.395 0.943
    (50, 20, 15) 3 0.585 0.622 0.638 0.677 0.068 1.000 0.081 1.000 0.673 0.860 0.731 0.927
    (60, 30, 20) 3 0.300 0.730 0.355 0.741 0.067 1.000 0.079 1.000 0.307 0.871 0.366 0.933
    T = 2.5
    (30, 20, 15) 1 0.264 0.718 0.314 0.748 0.067 1.000 0.079 1.000 0.255 0.878 0.303 0.920
    (50, 20, 15) 1 0.332 0.689 0.397 0.722 0.067 1.000 0.079 1.000 0.330 0.877 0.395 0.924
    (60, 30, 20) 1 0.222 0.757 0.271 0.764 0.066 1.000 0.078 1.000 0.215 0.905 0.266 0.913
    (30, 20, 15) 2 0.288 0.695 0.325 0.708 0.067 1.000 0.079 1.000 0.287 0.873 0.318 0.917
    (50, 20, 15) 2 0.369 0.668 0.434 0.697 0.068 1.000 0.080 1.000 0.363 0.874 0.439 0.942
    (60, 30, 20) 2 0.225 0.752 0.264 0.789 0.065 1.000 0.078 1.000 0.216 0.873 0.260 0.923
    (30, 20, 15) 3 0.309 0.716 0.394 0.712 0.067 1.000 0.080 1.000 0.306 0.908 0.408 0.934
    (50, 20, 15) 3 0.603 0.622 0.668 0.667 0.068 1.000 0.081 1.000 0.669 0.852 0.775 0.918
    (60, 30, 20) 3 0.306 0.715 0.362 0.717 0.067 1.000 0.079 1.000 0.309 0.871 0.375 0.931


    From Tables 1–4, the computational results show that, in most cases, the Bayesian estimation based on the SE, Linex and GE loss functions is more precise than the ML estimation. Also, when n and m increase, the mean squared error decreases. An exception occurs when (n, m, k) = (50, 20, 15) ; this case does not follow the pattern because the effective number of failures changes according to the case that occurs: the effective sample size here may be 15, 20 or 15 < D < 20 . Moreover, a comparison of the results for the informative priors with the corresponding non-informative ones shows that the former are more accurate, as we would expect. Finally, from the average lengths and coverage probabilities presented in Tables 5–8, we see that the estimates behave well in terms of coverage probability, and the Bayesian estimates show better performance than the ML estimates in terms of average width.
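    The comparison criteria used in the simulation tables can be summarized as follows. The sketch below is not the authors' code; it assumes the usual definitions of the mean squared error (MSE), expected bias (EB), average length (AL) and coverage probability (CP) computed over the Monte Carlo replications.

```python
# A minimal sketch (assumed definitions, not the authors' code) of the
# simulation summaries reported in the tables.
import numpy as np

def point_summaries(estimates, true_value):
    """MSE and expected bias (EB) of a point estimator over replications."""
    est = np.asarray(estimates, dtype=float)
    mse = np.mean((est - true_value) ** 2)
    eb = np.mean(est - true_value)          # EB taken here as the average bias
    return mse, eb

def interval_summaries(lowers, uppers, true_value):
    """Average length (AL) and coverage probability (CP) of interval estimates."""
    lo = np.asarray(lowers, dtype=float)
    up = np.asarray(uppers, dtype=float)
    al = np.mean(up - lo)
    cp = np.mean((lo <= true_value) & (true_value <= up))
    return al, cp
```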

    In order to show the performance of the inferential results established for the Burr Type-Ⅻ distribution based on the generalized progressive hybrid censoring, we consider here the real data set used in Example 1, which is reported in Wingo [20]. Wingo assumed a Burr Type-Ⅻ distribution to fit these lifetime data. The test was performed using 30 units but was terminated after the failure of 20 units. We shall use these data under the following progressive censoring scheme: suppose n = 20, m = 18 and R_{i} = 0 for i = 1, ..., 16, R_{17} = R_{18} = 1 ; then we would have the following progressive data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, and 2.9. We fix k = 16 and, for different values of T , we obtain three different generalized PHCS, namely (a sketch of how the termination rule produces each scheme is given after this list):

    1. Scheme 1: Suppose T = 2 , since T < X_{16:18:20} < X_{18:18:20} , then the experiment would have terminated at X_{16:18:20}, with R_{i}^{\ast } = 0 for i = 1, ..., 15 , R_{16}^{\ast } = 4 , R_{\tau }^{\ast } = 0, and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, and 2.05.

    2. Scheme 2: Suppose T = 2.7 , since X_{16:18:20} < T < X_{18:18:20} , then the experiment would have terminated at T = 2.7 , with R_{i}^{\ast } = 0 for i = 1, ..., 16 , R_{17}^{\ast } = 1 , R_{\tau }^{\ast } = 2 , and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, and 2.6.

    3. Scheme 3: Suppose T = 3.5 , since X_{16:18:20} < X_{18:18:20} < T , then the experiment would have terminated at X_{18:18:20}, with R^{\ast } = R , R_{\tau }^{\ast } = 0, and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, and 2.9.
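    The three schemes above follow directly from the generalized PHCS stopping rule: the experiment ends at X_{k:m:n} when T < X_{k:m:n} , at T when X_{k:m:n} < T < X_{m:m:n} , and at X_{m:m:n} otherwise. The following is a minimal sketch (not the authors' code) that reproduces the three cases from the progressively censored data listed above, taken in the order reported in the text.

```python
# A minimal sketch of the generalized PHCS termination rule described above.
# `x` is the progressively Type-II censored sample (as listed in the text),
# `k` the minimum number of failures and `T` the time threshold.

def generalized_phcs_sample(x, k, T):
    """Return (case, observed failures, termination time)."""
    m = len(x)
    if T < x[k - 1]:                        # Case 1: stop at the k-th failure
        return 1, x[:k], x[k - 1]
    elif T < x[m - 1]:                      # Case 2: stop at time T
        d = sum(1 for xi in x if xi <= T)   # failures observed by time T
        return 2, x[:d], T
    else:                                   # Case 3: stop at the m-th failure
        return 3, x, x[m - 1]

# The progressive sample of the example (n = 20, m = 18, k = 16).
x = [0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
     0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, 2.9]
for T in (2.0, 2.7, 3.5):
    case, obs, tau = generalized_phcs_sample(x, 16, T)
    print(f"T = {T}: Scheme {case}, {len(obs)} failures, terminated at {tau}")
```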

    The ML and Bayesian estimates for the parameters, survival and hazard functions based on the generalized PHCS are obtained and presented in Table 9 (a sketch of the likelihood maximization is given below). The 90\% and 95\% asymptotic confidence intervals and the credible intervals are constructed and presented in Table 10. Also, the point predictors and the 95\% equi-tailed and HPD prediction intervals are computed for Y_{s:\ell :N} from a future progressively censored sample of size \ell = 5 drawn from a sample of size N = 10 with progressive censoring scheme S = (0, 2, 1, 2, 0) , based on the generated generalized PHCS and two different choices of the hyperparameters as given in Section 7; these results are presented in Table 11.
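    Assuming the standard Burr Type-Ⅻ form f(x) = \alpha \beta x^{\beta -1}(1+x^{\beta })^{-(\alpha +1)} with survival function (1+x^{\beta })^{-\alpha } , the ML estimates can be obtained by numerically maximizing the generalized-PHCS log-likelihood. The sketch below is not the authors' implementation; it uses the Scheme 1 data and removal pattern given above, and its output may be compared with the ML column of Table 9.

```python
# A minimal sketch (assumed Burr XII form, not the authors' code) of the
# generalized-PHCS log-likelihood and its numerical maximization.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, xs, Rs, tau, R_tau):
    a, b = theta                      # alpha, beta
    if a <= 0 or b <= 0:
        return np.inf
    xs = np.asarray(xs, dtype=float)
    Rs = np.asarray(Rs, dtype=float)
    log1p_xb = np.log1p(xs ** b)
    ll = np.sum(np.log(a) + np.log(b) + (b - 1) * np.log(xs)
                - (a + 1) * log1p_xb)       # observed failure times
    ll += -a * np.sum(Rs * log1p_xb)        # units removed at the failures
    ll += -a * R_tau * np.log1p(tau ** b)   # survivors removed at termination
    return -ll

# Scheme 1 of the example: 16 failures, R*_16 = 4, R*_tau = 0.
xs = [0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
      0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05]
Rs = [0] * 15 + [4]
res = minimize(neg_loglik, x0=[1.0, 1.0], args=(xs, Rs, 2.05, 0),
               method="Nelder-Mead")
print("alpha_hat, beta_hat =", res.x)   # compare with the ML column of Table 9
```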

    Table 9.  The ML and Bayesian estimates of \widehat{\alpha } , \widehat{\beta } , \widehat{R(t) } , and \widehat{h(t)} at selected censoring schemes from the real data set.
    Columns: ML estimate, then Bayesian estimates under the squared error (BS), Linex (BL) and general entropy (BE) loss functions, each with IP and NIP priors.
    estimator Scheme | ML | BS: IP NIP | BL: IP NIP | BE: IP NIP
    \widehat{\alpha } 1 1.082 1.393 1.074 1.378 1.055 1.361 1.020
    2 1.071 1.397 1.073 1.382 1.056 1.365 1.024
    3 1.110 1.406 1.102 1.391 1.084 1.374 1.052
    \widehat{\beta } 1 1.436 0.979 1.435 0.975 1.411 0.967 1.384
    2 1.422 0.981 1.412 0.976 1.392 0.966 1.367
    3 1.469 0.999 1.489 0.994 1.466 0.983 1.441
    \widehat{R(t=1) } 1 0.472 0.386 0.484 0.385 0.482 0.378 0.470
    2 0.476 0.385 0.483 0.384 0.481 0.377 0.471
    3 0.463 0.383 0.474 0.381 0.472 0.374 0.462
    \widehat{h(t=1) } 1 0.777 0.675 0.761 0.671 0.749 0.661 0.710
    2 0.762 0.678 0.748 0.674 0.737 0.663 0.704
    3 0.815 0.694 0.807 0.691 0.796 0.679 0.764

    Table 10.  The C.I's and AL for \widehat{\alpha } , \widehat{\beta } , \widehat{R(t) } , and \widehat{h(t)} at selected censoring schemes from real data set.
    Columns: asymptotic confidence intervals, then Bayesian credible intervals under the IP and NIP priors, at the 90\% and 95\% levels; the second line of each entry gives the corresponding AL.
    estimator Scheme | Asymp. CI: 90\% 95\% | IP: 90\% 95\% | NIP: 90\% 95\%
    \widehat{\alpha} 1 (0.626, 1.538) (0.538, 1.626) (0.675, 1.579) (0.948, 1.931) (0.678, 1.584) (0.610, 1.691)
    0.913 1.087 0.904 0.983 0.906 1.081
    2 (0.626, 1.515) (0.541, 1.601) (1.011, 1.840) (0.963, 1.921) (0.678, 1.561) (0.624, 1.652)
    0.889 1.060 0.830 0.959 0.883 1.028
    3 (0.659, 1.560) (0.573, 1.646) (1.030, 1.839) (0.971, 1.935) (0.705, 1.612) (0.638, 1.694)
    0.901 1.073 0.809 0.964 0.907 1.056
    \widehat{\beta} 1 (0.932, 1.939) (0.836, 2.036) (0.978, 1.965) (0.727, 1.275) (0.958, 1.888) (0.880, 2.053)
    1.007 1.200 0.987 0.548 0.931 1.173
    2 (0.933, 1.911) (0.840, 2.005) (0.776, 1.215) (0.733, 1.263) (0.961, 1.924) (0.892, 2.020)
    0.978 1.165 0.440 0.530 0.963 1.127
    3 (0.978, 1.960) (0.884, 2.054) (0.784, 1.220) (0.743, 1.297) (1.029, 1.962) (0.953, 2.172)
    0.982 1.170 0.436 0.555 0.933 1.219
    \widehat{R(t)} 1 (0.323, 0.622) (0.294, 0.650) (0.334, 0.626) (0.262, 0.518) (0.333, 0.625) (0.309, 0.655)
    0.299 0.356 0.292 0.256 0.291 0.345
    2 (0.329, 0.623) (0.301, 0.651) (0.279, 0.496) (0.264, 0.512) (0.338, 0.625) (0.317, 0.648)
    0.293 0.350 0.217 0.248 0.287 0.332
    3 (0.319, 0.608) (0.291, 0.636) (0.279, 0.490) (0.261, 0.510) (0.327, 0.613) (0.309, 0.642)
    0.289 0.345 0.210 0.249 0.286 0.334
    \widehat{h(t)} 1 (0.400, 1.153) (0.328, 1.225) (0.440, 1.186) (0.473, 0.922) (0.432, 1.152) (0.393, 1.248)
    0.753 0.898 0.745 0.449 0.719 0.855
    2 (0.411, 1.112) (0.344, 1.179) (0.500, 0.874) (0.474, 0.926) (0.442, 1.139) (0.396, 1.220)
    0.701 0.835 0.375 0.451 0.697 0.824
    3 (0.454, 1.176) (0.385, 1.245) (0.516, 0.890) (0.480, 0.939) (0.486, 1.191) (0.446, 1.281)
    0.722 0.860 0.374 0.459 0.705 0.835

    Table 11.  Bayesian point predictor and 95\% ET and HPD prediction intervals Y_{s:\ell :N} for s = 1, ..., \ell .
    Columns: the Bayesian point predictor \widehat{Y}_{s:N} and the 95\% ET and HPD prediction intervals under the IP and NIP priors; the second line of each entry gives the corresponding interval lengths.
    Sch. s | IP: \widehat{Y}_{s:N} ET interval HPD interval | NIP: \widehat{Y}_{s:N} ET interval HPD interval
    Case-1 1 0.077 (0.001, 0.328) (0.055, 0.228) 0.194 (0.008, 0.619) (0.001, 0.142)
    0.327 0.173 0.611 0.142
    2 0.169 (0.012, 0.571) (0.000, 0.462) 0.353 (0.048, 0.928) (0.009, 0.793)
    0.559 0.462 0.88 0.784
    3 0.330 (0.039, 1.032) (0.006, 0.838) 0.580 (0.118, 1.480) (0.047, 1.252)
    0.993 0.832 1.362 1.205
    4 0.633 (0.093, 2.030) (0.023, 1.607) 0.955 (0.228, 2.642) (0.101, 2.129)
    1.937 1.584 2.415 2.028
    5 3.908 (0.270, 28.755) (0.028, 15.792) 4.166 (0.502, 30.960) (0.108, 16.274)
    28.485 15.764 30.457 16.166
    Case-2 1 0.078 (0.001, 0.331) (0.058, 0.309) 0.194 (0.008, 0.621) (0.001, 0.124)
    0.33 0.251 0.613 0.124
    2 0.171 (0.012, 0.574) (0.000, 0.466) 0.353 (0.046, 0.935) (0.008, 0.798)
    0.562 0.466 0.889 0.79
    3 0.333 (0.040, 1.035) (0.006, 0.842) 0.582 (0.116, 1.496) (0.044, 1.264)
    0.995 0.836 1.381 1.22
    4 0.636 (0.095, 2.028) (0.024, 1.608) 0.964 (0.224, 2.679) (0.097, 2.157)
    1.932 1.584 2.455 2.06
    5 3.879 (0.270, 28.090) (0.028, 15.518) 4.254 (0.500, 32.111) (0.104, 16.829)
    27.82 15.49 31.61 16.725
    Case-3 1 0.079 (0.001, 0.332) (0.004, 0.318) 0.188 (0.008, 0.596) (0.000, 0.159)
    0.331 0.314 0.588 0.158
    2 0.173 (0.013, 0.569) (0.000, 0.464) 0.341 (0.047, 0.887) (0.010, 0.762)
    0.557 0.464 0.839 0.752
    3 0.331 (0.042, 1.013) (0.007, 0.829) 0.556 (0.116, 1.392) (0.049, 1.189)
    0.971 0.821 1.276 1.14
    4 0.626 (0.098, 1.952) (0.027, 1.561) 0.905 (0.221, 2.422) (0.105, 1.984)
    1.854 1.534 2.201 1.879
    5 3.707 (0.276, 24.449) (0.034, 13.319) 3.823 (0.485, 25.068) (0.117, 13.824)
    24.173 13.285 24.583 13.707


    The Bayesian results are computed based on the joint prior density in (4.1) with two different choices of the hyperparameters (a, b, c, d) , namely (a sketch of the moment-matching idea is given after this list):

    1. Informative prior (IP): a = 24.2 , b = 29.2 , c = 18.432 and d = 15.0685 . (These values were obtained by setting the mean and variance of the marginal prior distribution of \alpha equal to \hat{\alpha}_{ML} from the complete data and 0.05, respectively, and likewise the mean and variance of the marginal prior distribution of \beta equal to \hat{\beta}_{ML} from the complete data and 0.05, respectively, and then solving the resulting two equations for each pair of hyperparameters.)

    2. Non-informative prior (NIP): a = b = c = d = 0 .
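    The IP values above come from moment matching: the prior mean of each parameter is set to its complete-data MLE and the prior variance to 0.05, and the resulting pair of equations is solved for the hyperparameters. As a purely illustrative sketch (the marginal priors implied by the joint prior (4.1) need not be gamma, so these formulas are not guaranteed to reproduce the quoted values), if a parameter had a Gamma(a, b) marginal prior with mean a/b and variance a/b^{2} , the solution would be:

```python
# Illustrative moment matching for a Gamma(a, b) marginal prior (an assumed
# form; the joint prior (4.1) used in the paper may be parameterized differently).
def gamma_hyperparameters(prior_mean, prior_var):
    a = prior_mean ** 2 / prior_var   # shape
    b = prior_mean / prior_var        # rate
    return a, b

# Hypothetical example: complete-data MLE 1.2 and prior variance 0.05.
print(gamma_hyperparameters(1.2, 0.05))   # (28.8, 24.0)
```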

    From Table 9, we note that the MLE is slightly larger than the Bayesian estimates based on the NIP for both \alpha and \beta . Compared with the Bayesian estimates based on the IP, the MLE is smaller for \alpha and larger for \beta . From Table 10, in most cases the average length of the intervals is smallest for Scheme 2, but not far from that of the other schemes. From Table 11, the highest posterior density intervals for the unobserved future sample give more accurate results than the equi-tailed intervals, while the point predictor values meet our expectations when compared with the original real data.

    In general, we can conclude that the generalized PHCS is applicable when testing electronic components whose failure times follow the Burr Type-Ⅻ model, and it is well suited to saving the time and cost of lifetime experiments.

    The ML and Bayesian estimates of the unknown parameters, as well as of the survival and hazard functions, of the Burr Type-Ⅻ lifetime distribution are obtained when the observed sample is a generalized PHCS sample. The existence and uniqueness of the MLEs are investigated. In the Bayesian approach, the squared error, Linex and general entropy loss functions based on informative and non-informative prior distributions are considered. The 90\% and 95\% asymptotic confidence intervals and credible intervals are also constructed for the parameters as well as for the survival and hazard functions. The Bayesian point and interval prediction of the unobserved order statistics of a future progressively Type-Ⅱ censored sample is also developed. From the numerical results, we list the following concluding remarks:

    1. The MLEs of the unknown parameters of the Burr Type-Ⅻ lifetime distribution based on the generalized PHCS sample exist and are unique if and only if the sample contains some values below 1.

    2. In most cases, the Bayesian estimates based on the informative priors perform better than the MLEs.

    3. The ML estimates in Tables 1–4 are, as expected, very similar to the Bayesian estimates based on the non-informative priors. Thus, it is often easier to use the ML estimators rather than the Bayesian estimators when we have no prior knowledge about the unknown parameters, because the Bayesian estimators are computationally more expensive.

    4. In most cases, as n and m increase, the MSE decreases.

    5. The average length of the confidence intervals decreases as T increases. Further, comparing the results for the informative priors with those for the non-informative priors indicates that the former produces more accurate results, as we expect. In addition, as n and m increase, the average length decreases.

    6. The credible intervals perform well as compared with the asymptotic confidence intervals.

    7. In all cases, the 95% confidence intervals are wider than the 90% ones, as expected.

    All in all, the proposed techniques based on the generalized PHCS can be applied for testing the failure times of products and electronic components that follow the Burr Type-Ⅻ lifetime model, which guarantees savings in testing time and cost while maintaining good efficiency of the analysis.

    The authors are grateful to the anonymous referee for carefully checking the details and for helpful comments that improved this paper. Also, the authors would like to thank the Deanship of Scientific Research at King Saud University for funding this research group (RG-1435-056).

    The authors declare there is no conflict of interest.



    [1] N. Balakrishnan, R. Aggarwala, Progressive censoring: theory, methods, and applications, Springer Science & Business Media, 2000.
    [2] N. Balakrishnan, Progressive censoring methodology: an appraisal, Test, 16 (2007), 211. doi: 10.1007/s11749-007-0061-y
    [3] E. Cramer, G. Iliopoulos, Adaptive progressive Type-Ⅱ censoring, Test, 19 (2010), 342–358. doi: 10.1007/s11749-009-0167-5
    [4] M. Z. Raqab, A. Asgharzadeh, R. Valiollahi, Prediction for Pareto distribution based on progressively Type-Ⅱ censored samples, Comput. Stat. Data Anal., 54 (2010), 1732–1743. doi: 10.1016/j.csda.2010.02.005
    [5] M. M. El-Din, A. R. Shafay, One-and two-sample Bayesian prediction intervals based on progressively Type-Ⅱ censored data, Stat. Pap. (Berl), 54 (2013), 287–307. doi: 10.1007/s00362-011-0426-x
    [6] N. Balakrishnan, A. C. Cohen, Order statistics & inference: estimation methods, Academic Press, San Diego, 1991.
    [7] D. Kundu, A. Joarder, Analysis of Type-Ⅱ progressively hybrid censored data, Comput. Stat. Data Anal., 50 (2006), 2509–2528.
    [8] A. Childs, B. Chandrasekar, N. Balakrishnan, Exact likelihood inference for an exponential parameter under progressive hybrid censoring schemes, In: Statistical Models and Methods for Biomedical and Technical Systems, Birkhäuser Boston, 2008,319–330.
    [9] C. T. Lin, C. C. Chou, Y. L. Huang, Inference for the Weibull distribution with progressive hybrid censoring, Comput. Stat. Data Anal., 56 (2012), 451–467. doi: 10.1016/j.csda.2011.09.002
    [10] C. T. Lin, Y. L. Huang, On progressive hybrid censored exponential distribution, J. Stat. Comput. Simul., 82 (2012), 689–709. doi: 10.1080/00949655.2010.550581
    [11] F. Hemmati, E. Khorram, Statistical analysis of the log-normal distribution under Type-Ⅱ progressive hybrid censoring schemes, Commun. Stat. Simul. Comput., 42 (2013), 52–75. doi: 10.1080/03610918.2011.633195
    [12] M. M. El-Din, Y. Abdel-Aty, M. H. Abu-Moussa, Statistical inference for the Gompertz distribution based on Type-Ⅱ progressively hybrid censored data, Commun. Stat. Simul. Comput., 46 (2017), 6242–6260. doi: 10.1080/03610918.2016.1202270
    [13] Y. Cho, H. Sun, K. Lee, Exact likelihood inference for an exponential parameter under generalized progressive hybrid censoring scheme, Stat. Methodol., 23 (2015), 18–34. doi: 10.1016/j.stamet.2014.09.002
    [14] M. M. El-Din, A. R. Shafay, M. Nagy, Statistical inference under adaptive progressive censoring scheme, Comput. Stat., 33 (2018), 31–74. doi: 10.1007/s00180-017-0745-z
    [15] M. M. El-Din, M. Nagy, M. H. Abu-Moussa, Estimation and prediction for gompertz distribution under the generalized progressive hybrid censored data, Ann. Data Sci., 6 (2019), 673–705. doi: 10.1007/s40745-019-00199-3
    [16] M. H. Abu-Moussa, M. M. El-Din, M. A. Mosilhy, Statistical inference for Gompertz distribution using the adaptive-general progressive Type-Ⅱ censored samples, Am. J. Math. Manag. Sci., (2020), 1–23.
    [17] K. Lee, H. Sun, Y. Cho, Exact likelihood inference of the exponential parameter under generalized Type Ⅱ progressive hybrid censoring, J. Korean Stat. Soc., 45 (2016), 123–136. doi: 10.1016/j.jkss.2015.08.003
    [18] P. Parviz, H. Panahi, Classical and Bayesian inference for the Burr Type Ⅻ distribution under generalized progressive Type Ⅰ hybrid censored sample, J. Stat. Theory App., 19 (2020), 547–557.
    [19] R. Calabria, G. Pulcini, Point estimation under asymmetric loss functions for left-truncated exponential samples, Commun. Stat. Theory Methods, 25 (1996), 585–600. doi: 10.1080/03610929608831715
    [20] D. R. Wingo, Maximum likelihood estimation of Burr Ⅻ distribution parameters under Type-Ⅱ censoring, Microelectron. Reliab., 33 (1993), 1251–1257. doi: 10.1016/0026-2714(93)90126-J
    [21] W. H. Greene, Econometric analysis, 4th edition, International edition, New Jersey: Prentice Hall, 2000, 201–215.
    [22] A. Agresti, Logistic regression, Categorical data analysis, Wiley & Sons, Inc, 2002.
    [23] E. K. Al-Hussaini, Z. F. Jaheen, Bayesian estimation of the parameters, reliability and failure rate functions of the Burr Type Ⅻ failure model, J. Stat. Comput. Simul., 41 (1992), 31–40. doi: 10.1080/00949659208811389
    [24] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys., 21 (1953), 1087–1092. doi: 10.1063/1.1699114
    [25] I. Basak, P. Basak, N. Balakrishnan, On some predictors of times to failure of censored items in progressively censored samples, Comput. Stat. Data Anal., 50 (2006), 1313–1337. doi: 10.1016/j.csda.2005.01.011
    [26] N. Balakrishnan, A. Childs, B. Chandrasekar, An efficient computational method for moments of order statistics under progressive censoring, Stat. Probab. Lett., 60 (2002), 359–365. doi: 10.1016/S0167-7152(02)00267-5
    [27] H. K. T. Ng, D. Kundu, P. S. Chan, Statistical analysis of exponential lifetimes under an adaptive TypeⅡ progressive censoring scheme, Nav. Res. Logist., 56 (2009), 687–698. doi: 10.1002/nav.20371
  • Supplementary material: math-06-09-564-Supplementary.nb
  © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)