




Especially in reliability and survival analysis, the progressive Type-Ⅱ censoring scheme is the most commonly used, and it is preferable to the classical Type-Ⅱ censoring scheme. Progressive censoring is beneficial in several real-life areas, including industrial, life-research and clinical applications, since it permits the removal of experimental units that survive until the test finishes. Consider a life-testing experiment in which $n$ units are placed on test. Under limitations of cost and time it is not desirable to observe all failure times, so only part of the failures is observed; such a sample is called a censored sample. In a progressive censoring scheme only $m$ $(m<n)$ failure times are observed. At the occurrence of the first failure, $R_1$ of the $n-1$ surviving units are randomly selected and removed from the test; at the second observed failure, $R_2$ of the $n-R_1-2$ surviving units are randomly selected and removed, and so on. Finally, at the time of the $m$th failure, the experiment stops and the remaining surviving units $R_m=n-R_1-\cdots-R_{m-1}-m$ are removed from the test. The censoring sizes $\{R_i,\ i=1,\ldots,m-1\}$ are prefixed. We denote the $m$ ordered failure times thus observed by $X_{1:m:n},\ldots,X_{m:m:n}$. It is clear that $n=m+\sum_{k=1}^{m}R_k$. The ordered failure times obtained from this form of censoring are called progressively Type-Ⅱ censored order statistics. Various authors have studied the order statistics and other features of such progressively Type-Ⅱ censored life tests. Some primary references are Balakrishnan and Aggarwala [1], Balakrishnan [2], Cramer and Iliopoulos [3], Raqab et al. [4], Mohie El-Din and Shafay [5], and Balakrishnan and Cohen [6].

The drawback of the progressive Type-Ⅱ censoring scheme is that, if the units are highly reliable, the experiment time can be very long. Kundu and Joarder [7] and Childs et al. [8] treated this problem by proposing a new type of censoring in which the stopping time of the experiment is $\min\{X_{m:m:n},T\}$, where $T\in(0,\infty)$ is fixed beforehand. This type is called the progressive hybrid censoring scheme (PHCS). The total duration of the experiment under the PHCS does not exceed $T$. Several authors have studied the PHCS; see for instance Lin et al. [9,10], Hemmati and Khorram [11], and Mohie El-Din et al. [12].

The downside of the PHCS, on the other hand, is that it cannot be applied when only very few failures can be observed before $T$. For this reason, Cho et al. [13] proposed a more general type of censoring, called the generalized PHCS, in which a minimum number of failures is pre-determined. Under this censoring scheme the life-testing experiment saves time and the cost of failures, and observing more failures improves the statistical efficiency of the estimates. The next section gives a detailed description of the generalized PHCS and its advantages. One important special case of the generalized PHCS is adaptive progressive censoring; this is the first special case of the generalized PHCS, as will be shown in Section 2. For recent work on this topic see, for example, Mohie El-Din et al. [14], Mohie El-Din et al. [15], Abu-Moussa et al. [16], Lee et al. [17] and Parviz and Panahi [18].

The contribution of this paper is the development of inference techniques for Burr Type-Ⅻ data based on the generalized PHCS, which has not been considered in the literature. The Burr Type-Ⅻ distribution has the following probability density function (PDF) and cumulative distribution function (CDF), respectively given by

f(x|\alpha,\beta)=\alpha\beta x^{\beta-1}(1+x^{\beta})^{-(\alpha+1)},\quad x>0,\ \alpha>0,\ \beta>0, (1.1)
F(x|\alpha,\beta)=1-(1+x^{\beta})^{-\alpha},\quad x>0,\ \alpha>0,\ \beta>0. (1.2)

    The survival and hazard functions are given, respectively, by

R(x|\alpha,\beta)=\bar{F}(x|\alpha,\beta)=1-F(x|\alpha,\beta)=(1+x^{\beta})^{-\alpha},\qquad h(x|\alpha,\beta)=\alpha\beta x^{\beta-1}(1+x^{\beta})^{-1},\quad x>0,\ \alpha>0,\ \beta>0. (1.3)
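For use in the numerical illustrations later in the paper, a minimal sketch of these four functions is given below (a Python/NumPy transcription; the function names are illustrative and not taken from any particular package):

```python
import numpy as np

def burr12_pdf(x, alpha, beta):
    # f(x | alpha, beta) = alpha * beta * x^(beta - 1) * (1 + x^beta)^-(alpha + 1), x > 0, per (1.1)
    x = np.asarray(x, dtype=float)
    return alpha * beta * x**(beta - 1) * (1.0 + x**beta)**(-(alpha + 1))

def burr12_cdf(x, alpha, beta):
    # F(x | alpha, beta) = 1 - (1 + x^beta)^-alpha, per (1.2)
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + x**beta)**(-alpha)

def burr12_sf(x, alpha, beta):
    # Survival function R(x | alpha, beta) = (1 + x^beta)^-alpha, per (1.3)
    return 1.0 - burr12_cdf(x, alpha, beta)

def burr12_hazard(x, alpha, beta):
    # Hazard h(x | alpha, beta) = alpha * beta * x^(beta - 1) / (1 + x^beta), per (1.3)
    x = np.asarray(x, dtype=float)
    return alpha * beta * x**(beta - 1) / (1.0 + x**beta)
```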

    The Bayesian estimate ˆθBS relative to the squared error loss function is given by the mean of the posterior distribution

\hat{\theta}_{BS}=E_{\theta|\underline{x}}[\theta]. (1.4)

    Assuming that the minimum loss exists at ˆθ=θ, it is possible to express the LINEX loss function as

L_{BL}(\hat{\theta},\theta)=\exp[\upsilon(\hat{\theta}-\theta)]-\upsilon(\hat{\theta}-\theta)-1,\qquad \upsilon\neq 0, (1.5)

The sign and magnitude of the shape parameter $\upsilon$ determine the direction and degree of asymmetry. It is easily seen that the (unique) Bayesian estimator of $\theta$ under the LINEX loss function, denoted by $\hat{\theta}_{BL}$, i.e., the value which minimizes $E_{\theta|\underline{x}}[L_{BL}(\hat{\theta},\theta)]$, is given by

\hat{\theta}_{BL}=-\frac{1}{\upsilon}\ln\left\{E_{\theta|\underline{x}}[\exp(-\upsilon\theta)]\right\}, (1.6)

given that the expectation $E_{\theta|\underline{x}}[\exp(-\upsilon\theta)]$ exists and is finite. Calabria and Pulcini [19] addressed the issue of selecting the value of the parameter $\upsilon$. The general entropy (GE) loss function is another widely used asymmetric loss function; it is given by

L_{BE}(\hat{\theta},\theta)\propto\left(\frac{\hat{\theta}}{\theta}\right)^{\kappa}-\kappa\ln\left(\frac{\hat{\theta}}{\theta}\right)-1, (1.7)

where for $\kappa>0$ a positive error has a more serious impact than a negative error, while for $\kappa<0$ negative errors are more serious than positive ones. In this case, the Bayesian estimate $\hat{\theta}_{BE}$ relative to the GE loss function is given by

\hat{\theta}_{BE}=\left\{E_{\theta|\underline{x}}[\theta^{-\kappa}]\right\}^{-1/\kappa}, (1.8)

provided that the involved expectation $E_{\theta|\underline{x}}[\theta^{-\kappa}]$ is finite. It can be shown that, when $\kappa=1$, the Bayesian estimate in (1.8) coincides with the Bayesian estimate under the weighted squared error loss function; similarly, when $\kappa=-1$, it coincides with the Bayesian estimate under the SE loss function.

The remainder of this paper is structured as follows: A summary of the generalized PHCS model is provided in Section 2. Section 3 derives the maximum likelihood (ML) estimates together with their existence and uniqueness properties, while the Bayesian estimates of the unknown parameters, survival and hazard functions under three loss functions are derived in Section 4. Section 5 develops the Bayesian single-sample prediction of the failure times of the units removed at each censoring stage, while Section 6 treats the Bayesian prediction of progressive order statistics from an unobserved future sample of the same distribution. Simulation studies are conducted in Section 7 to compare the efficiency of the proposed inferential techniques. In Section 8, a real-life data set is used to demonstrate the theoretical findings. Finally, the paper is concluded in Section 9.

Consider a life study in which $n$ identical units are tested, and denote the lifetimes, coming from a distribution with CDF $F(x|\alpha,\beta)$ and PDF $f(x|\alpha,\beta)$, by $X_{1},X_{2},\ldots,X_{n}$. The generalized PHCS is as follows. Let $T>0$ and $k,m\in\{1,2,\ldots,n\}$ be pre-fixed integers with $k<m$, and let the pre-determined censoring scheme $R=(R_{1},R_{2},\ldots,R_{m})$ satisfy $n=m+R_{1}+\cdots+R_{m}$. At the occurrence of the first failure, $R_{1}$ of the remaining units are eliminated randomly; at the occurrence of the second failure, $R_{2}$ of the surviving units are eliminated, and so on, until the termination time $T^{*}=\max\{X_{k:m:n},\min\{X_{m:m:n},T\}\}$ is reached, at which moment the rest of the surviving units are eliminated from the test. The generalized PHCS modifies the PHCS by permitting the experiment to proceed beyond $T$ if only a few failures have been observed up to $T$. Ideally, the experimenters would like to observe $m$ failures, but they will observe at least $k$ failures. Let $D$ denote the number of failures observed up to $T$ (see Figure 1).

    Figure 1.  Schematic representation of generalized progressive hybrid censoring scheme.

    As mentioned above, one of the following types of observations is given under the generalized PHCS:

1. Assume that the $k$th failure occurs after $T$; then the experiment terminates at $X_{k:m:n}$ and the observations are $\{X_{1:m:n}<\cdots<X_{k:m:n}\}$.

2. Assume that $T$ is reached after the $k$th failure and before the $m$th failure. In this case, the termination time is $T$ and we observe $\{X_{1:m:n}<\cdots<X_{k:m:n}<X_{k+1:m:n}<\cdots<X_{D:m:n}\}$.

3. Assume that the $m$th failure occurs after the $k$th failure and before $T$; then the termination time is $X_{m:m:n}$ and we observe $\{X_{1:m:n}<\cdots<X_{k:m:n}<X_{k+1:m:n}<\cdots<X_{m:m:n}\}$.

Now, the joint probability density function based on the generalized PHCS is, for all cases, given by

f_{\underline{X}}(\underline{x})=\left[\prod_{i=1}^{D^{*}}\sum_{j=i}^{m}(R_{j}+1)\right]\prod_{i=1}^{D^{*}}f(x_{i:D^{*}:n})\left[\bar{F}(x_{i:D^{*}:n})\right]^{R_{i}}\left[\bar{F}(T)\right]^{R_{\tau}}, (2.1)

where $R_{j}$ is the $j$th component of the vector

R=\begin{cases}\left(R_{1},\ldots,R_{D},0,\ldots,0,R_{k}^{*}=n-k-\sum_{j=1}^{D}R_{j}\right), & \text{if } T<X_{k:m:n}<X_{m:m:n},\\ (R_{1},\ldots,R_{D}), & \text{if } X_{k:m:n}\le T<X_{m:m:n},\\ (R_{1},\ldots,R_{m}), & \text{if } X_{k:m:n}<X_{m:m:n}\le T,\end{cases} (2.2)

$R_{\tau}$ is the number of units removed at $T$, given by

R_{\tau}=\begin{cases}0, & \text{if } T<X_{k:m:n}<X_{m:m:n},\\ n-D-\sum_{j=1}^{D}R_{j}, & \text{if } X_{k:m:n}\le T<X_{m:m:n},\\ 0, & \text{if } X_{k:m:n}<X_{m:m:n}\le T,\end{cases} (2.3)
D^{*}=\begin{cases}k, & \text{if } T<X_{k:m:n}<X_{m:m:n},\\ D, & \text{if } X_{k:m:n}\le T<X_{m:m:n},\\ m, & \text{if } X_{k:m:n}<X_{m:m:n}\le T,\end{cases} (2.4)

and

\underline{x}=\begin{cases}(x_{1:m:n},\ldots,x_{k:m:n}), & \text{if } T<X_{k:m:n}<X_{m:m:n},\\ (x_{1:m:n},\ldots,x_{D:m:n}), & \text{if } X_{k:m:n}\le T<X_{m:m:n},\\ (x_{1:m:n},\ldots,x_{m:m:n}), & \text{if } X_{k:m:n}<X_{m:m:n}\le T.\end{cases} (2.5)

    The likelihood function of α,β under the generalized PHCS can be derived using (1.2) and (1.1) in (2.1), as

L(\alpha,\beta|\underline{x})=\left[\prod_{i=1}^{D^{*}}\sum_{j=i}^{m}(R_{j}+1)\right]\alpha^{D^{*}}\beta^{D^{*}}\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\exp[-\alpha W(\beta|\underline{x})], (2.6)

where $W(\beta|\underline{x})=\sum_{i=1}^{D^{*}}(R_{i}+1)\ln(1+x_{i}^{\beta})+R_{\tau}\ln(1+T^{\beta})$ and $x_{i}=x_{i:D^{*}:n}$ for simplicity of notation.
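The likelihood is straightforward to evaluate numerically. The sketch below transcribes $W(\beta|\underline{x})$ and the log of the likelihood (2.6) up to an additive constant (a Python/NumPy sketch under the notation above; `x` holds the observed failure times $x_{1},\ldots,x_{D^{*}}$, `R` the applied censoring numbers, and `R_tau`, `T` are as defined in (2.3)):

```python
import numpy as np

def W(beta, x, R, R_tau, T):
    # W(beta | x) = sum_i (R_i + 1) * ln(1 + x_i^beta) + R_tau * ln(1 + T^beta)
    x = np.asarray(x, dtype=float)
    R = np.asarray(R, dtype=float)
    return np.sum((R + 1.0) * np.log1p(x**beta)) + R_tau * np.log1p(T**beta)

def log_likelihood(alpha, beta, x, R, R_tau, T):
    # ln L(alpha, beta | x) from (2.6), dropping the constant factor
    x = np.asarray(x, dtype=float)
    D_star = x.size
    return (D_star * (np.log(alpha) + np.log(beta))
            + (beta - 1.0) * np.sum(np.log(x))
            - np.sum(np.log1p(x**beta))
            - alpha * W(beta, x, R, R_tau, T))
```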

    The corresponding log-likelihood function is obtained from (2.6) as

\ln L(\alpha,\beta|\underline{x})=\text{const.}+D^{*}(\ln\alpha+\ln\beta)+(\beta-1)\sum_{i=1}^{D^{*}}\ln x_{i}-\sum_{i=1}^{D^{*}}\ln(1+x_{i}^{\beta})-\alpha W(\beta|\underline{x}), (3.1)

    equating the first derivatives of (3.1) with respect to β and α to zero, we get

\frac{\partial\ln L(\alpha,\beta|\underline{x})}{\partial\alpha}=\frac{D^{*}}{\alpha}-W(\beta|\underline{x})=0, (3.2)
\frac{\partial\ln L(\alpha,\beta|\underline{x})}{\partial\beta}=\frac{D^{*}}{\beta}+\sum_{i=1}^{D^{*}}\ln x_{i}-\sum_{i=1}^{D^{*}}\frac{\{\alpha(R_{i}+1)+1\}x_{i}^{\beta}\ln x_{i}}{1+x_{i}^{\beta}}-\frac{\alpha R_{\tau}T^{\beta}\ln T}{1+T^{\beta}}=0, (3.3)

ML estimates of the parameters $\alpha$ and $\beta$, $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$ respectively, can be obtained by solving the two likelihood equations (3.2) and (3.3). We have employed the Newton-Raphson iteration method to evaluate $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$. By the invariance property, the ML estimates of the corresponding survival and hazard functions are then given, respectively, by

\hat{R}_{ML}(t)=(1+t^{\hat{\beta}_{ML}})^{-\hat{\alpha}_{ML}}, (3.4)
\hat{h}_{ML}(t)=\hat{\alpha}_{ML}\hat{\beta}_{ML}t^{\hat{\beta}_{ML}-1}(1+t^{\hat{\beta}_{ML}})^{-1}. (3.5)

The ML estimate of $\alpha$ can be obtained in explicit form, in terms of $\hat{\beta}_{ML}$, from (3.2) as

\hat{\alpha}_{ML}=\frac{D^{*}}{W(\hat{\beta}_{ML}|\underline{x})}. (3.6)

Then, $\hat{\alpha}_{ML}$ exists and is unique if $\hat{\beta}_{ML}$ exists and is unique. Now, substituting (3.6) into (3.3), we get

J(\beta)=\frac{D^{*}}{\beta}+\sum_{i=1}^{D^{*}}\frac{\ln x_{i}}{1+x_{i}^{\beta}}-D^{*}\,\frac{\sum_{i=1}^{D^{*}}\frac{(R_{i}+1)x_{i}^{\beta}\ln x_{i}}{1+x_{i}^{\beta}}+\frac{R_{\tau}T^{\beta}\ln T}{1+T^{\beta}}}{\sum_{i=1}^{D^{*}}(R_{i}+1)\ln(1+x_{i}^{\beta})+R_{\tau}\ln(1+T^{\beta})}. (3.7)

The MLE of $\beta$ is obtained by solving the non-linear equation $J(\beta)=0$ in $\beta$. The question now is whether the MLE of $\beta$ exists and is unique; the answer depends on the behavior of the function $J(\beta)$. To estimate the parameters $\alpha$ and $\beta$ jointly, the sample must contain at least two distinct observations, $x_{i}\neq x_{j}$ for some $i\neq j$. The following theorem gives the conditions necessary for the existence and uniqueness of $\hat{\beta}_{ML}$.

Theorem 3.1. Let $x_{1}\le x_{2}\le\cdots\le x_{D^{*}}$ be the data sample with at least two distinct values. Then the MLE $\hat{\beta}$, and hence $\hat{\alpha}$, exists and is unique if and only if $x_{i}<1$ for some $i$ $(1\le i\le D^{*})$.

Proof. The idea of the proof is to show that $J(\beta)$ is a decreasing function with $J(0^{+})>0$ and $J(\infty)<0$, which means that $J(\beta)$ has a unique root in $(0,\infty)$; therefore $\hat{\alpha}$ also exists and is unique.

Now, after straightforward algebraic manipulation, we can show that

J(0^{+})=\infty\qquad\text{and}\qquad J(\infty)=\sum_{i=1}^{D^{*}}I_{i}(x_{i})\ln x_{i}, (3.8)

    where

I_{i}(x_{i})=\begin{cases}1, & \text{if } 0<x_{i}<1,\\ 0, & \text{if } x_{i}\ge 1.\end{cases} (3.9)

It is obvious that $J(\infty)<0$ if and only if $x_{i}<1$ for some $1\le i\le D^{*}$; therefore, there exists at least one finite solution of the equation $J(\beta)=0$. By then showing that $J(\beta)$ is monotone decreasing in $\beta$, i.e., that its derivative $J^{\prime}(\beta)$ is negative, it follows that $\hat{\beta}_{ML}$ exists and is unique.

For more details about the existence and uniqueness of the Burr Type-Ⅻ parameter estimates in the case of Type-Ⅱ censoring, see Wingo [20].

Example 1. In this example, a real data set representing the time to failure (in months) of electronic components on test is considered. These data were reported in Wingo [20], who assumed that the Burr Type-Ⅻ distribution fits these lifetime data. The test used 30 units but was terminated after the failure of 20 units. The data are given as follows:

0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.5, 2.6, 2.9, 3.1

Here, we use these data to generate generalized PHCS samples as follows:

1. Case Ⅰ: Let $T=1$, $k=15$, $m=20$ and $n=30$; then $T<X_{k}<X_{m}$ and the termination time is $x_{k}=1.8$. The failure times are $x=\{0.1,0.1,0.2,0.3,0.4,0.5,0.5,0.6,0.7,0.8,0.9,0.9,1.2,1.6,1.8\}$ with censoring scheme $R=\{0,1,0,0,2,0,0,0,3,0,0,0,0,0,9\}$ and $R_{\tau}=0$.

2. Case Ⅱ: Let $T=2.55$, $k=15$, $m=20$ and $n=30$; then $X_{k}<T<X_{m}$ and the termination time is $T$. The failure times are $x=\{0.1,0.1,0.2,0.3,0.4,0.5,0.5,0.6,0.7,0.8,0.9,0.9,1.2,1.6,1.8,2.3,2.5\}$ with censoring scheme $R=\{0,1,0,0,2,0,0,0,3,0,0,0,0,2,0,0,0\}$ and $R_{\tau}=5$.

3. Case Ⅲ: Let $T=3.5$, $k=15$, $m=20$ and $n=30$; then $X_{k}<X_{m}<T$ and the termination time is $X_{m}=3.1$. The failure times are $x=\{0.1,0.1,0.2,0.3,0.4,0.5,0.5,0.6,0.7,0.8,0.9,0.9,1.2,1.6,1.8,2.3,2.5,2.6,2.9,3.1\}$ with censoring scheme $R=\{0,1,0,0,2,0,0,0,3,0,0,0,0,2,0,0,0,0,0,2\}$ and $R_{\tau}=0$.

Figure 2 shows that the derivative $J^{\prime}(\beta)$ is negative in all cases, while Figure 3 shows that $J(\beta)$ is a monotone decreasing function with only one root of $J(\beta)=0$.

Figure 2.  The graph of $J^{\prime}(\beta)$ for (a) Case-Ⅰ, (b) Case-Ⅱ, and (c) Case-Ⅲ.
Figure 3.  The graph of $J(\beta)$ for (a) Case-Ⅰ, (b) Case-Ⅱ and (c) Case-Ⅲ.

This example shows that the MLE of $\beta$ exists and is unique, and hence so does the MLE of $\alpha$, where $\hat{\alpha}_{ML}$ equals 0.763076, 0.774599 and 0.853238 for Case-Ⅰ, Case-Ⅱ and Case-Ⅲ, respectively.
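For readers who wish to reproduce such profile computations, the sketch below solves $J(\beta)=0$ numerically for the Case-Ⅲ data above and then recovers $\hat{\alpha}_{ML}=D^{*}/W(\hat{\beta}_{ML}|\underline{x})$; SciPy's brentq is used here as one possible root finder, and the bracketing interval is an assumption of the sketch rather than part of the original procedure.

```python
import numpy as np
from scipy.optimize import brentq

def W(beta, x, R, R_tau, T):
    # W(beta | x) as defined after (2.6)
    x, R = np.asarray(x, dtype=float), np.asarray(R, dtype=float)
    return np.sum((R + 1.0) * np.log1p(x**beta)) + R_tau * np.log1p(T**beta)

def J(beta, x, R, R_tau, T):
    # Profile equation (3.7), obtained by substituting alpha = D*/W(beta) into (3.3)
    x, R = np.asarray(x, dtype=float), np.asarray(R, dtype=float)
    D_star = x.size
    num = (np.sum((R + 1.0) * x**beta * np.log(x) / (1.0 + x**beta))
           + R_tau * T**beta * np.log(T) / (1.0 + T**beta))
    return (D_star / beta
            + np.sum(np.log(x) / (1.0 + x**beta))
            - D_star * num / W(beta, x, R, R_tau, T))

# Case-III data of Example 1 (T = 3.5, k = 15, m = 20, n = 30)
x = [0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9,
     1.2, 1.6, 1.8, 2.3, 2.5, 2.6, 2.9, 3.1]
R = [0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2]
R_tau, T = 0, 3.5

beta_ml = brentq(J, 0.05, 20.0, args=(x, R, R_tau, T))   # assumed bracket for the root
alpha_ml = len(x) / W(beta_ml, x, R, R_tau, T)            # (3.6)
print(beta_ml, alpha_ml)
```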

For large $D^{*}$, the observed Fisher information matrix of the parameters $\alpha$ and $\beta$ is given by

I(\hat{\alpha},\hat{\beta})=\left[\begin{matrix}-\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\alpha^{2}} & -\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\alpha\,\partial\beta}\\ -\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\beta\,\partial\alpha} & -\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\beta^{2}}\end{matrix}\right]_{(\hat{\alpha}_{ML},\hat{\beta}_{ML})}, (3.10)

    where

\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\alpha^{2}}=-\frac{D^{*}}{\alpha^{2}},
\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\beta^{2}}=-\frac{D^{*}}{\beta^{2}}-\sum_{i=1}^{D^{*}}[\alpha(R_{i}+1)+1]\frac{(\ln x_{i})^{2}x_{i}^{\beta}}{(1+x_{i}^{\beta})^{2}},
\frac{\partial^{2}\ln L(\alpha,\beta|\underline{x})}{\partial\alpha\,\partial\beta}=-\sum_{i=1}^{D^{*}}\frac{(R_{i}+1)x_{i}^{\beta}\ln x_{i}}{1+x_{i}^{\beta}},

and 100(1−γ)% two-sided approximate confidence intervals for the parameters $\alpha$ and $\beta$ are then given by

\left(\hat{\alpha}-z_{\gamma/2}\sqrt{V(\hat{\alpha})},\ \hat{\alpha}+z_{\gamma/2}\sqrt{V(\hat{\alpha})}\right), (3.11)

and

\left(\hat{\beta}-z_{\gamma/2}\sqrt{V(\hat{\beta})},\ \hat{\beta}+z_{\gamma/2}\sqrt{V(\hat{\beta})}\right), (3.12)

respectively, where $V(\hat{\alpha})$ and $V(\hat{\beta})$ are the estimated variances of $\hat{\alpha}_{ML}$ and $\hat{\beta}_{ML}$, given by the first and second diagonal elements of $I^{-1}(\hat{\alpha},\hat{\beta})$, and $z_{\gamma/2}$ is the upper $(\gamma/2)$th percentile of the standard normal distribution.
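A compact sketch of how the observed information (3.10) and the Wald-type intervals (3.11)-(3.12) can be evaluated is given below (Python with NumPy/SciPy; it simply transcribes the second-derivative expressions stated above):

```python
import numpy as np
from scipy.stats import norm

def observed_information(alpha, beta, x, R):
    # Negative second derivatives of ln L at (alpha, beta), following (3.10) and the
    # second-derivative expressions given above.
    x, R = np.asarray(x, dtype=float), np.asarray(R, dtype=float)
    D_star = x.size
    xb = x**beta
    I_aa = D_star / alpha**2
    I_bb = D_star / beta**2 + np.sum((alpha * (R + 1.0) + 1.0)
                                     * (np.log(x)**2) * xb / (1.0 + xb)**2)
    I_ab = np.sum((R + 1.0) * xb * np.log(x) / (1.0 + xb))
    return np.array([[I_aa, I_ab], [I_ab, I_bb]])

def asymptotic_ci(alpha_hat, beta_hat, x, R, gamma=0.05):
    # 100(1 - gamma)% approximate confidence intervals (3.11)-(3.12)
    V = np.linalg.inv(observed_information(alpha_hat, beta_hat, x, R))
    z = norm.ppf(1.0 - gamma / 2.0)
    se_alpha, se_beta = np.sqrt(np.diag(V))
    return ((alpha_hat - z * se_alpha, alpha_hat + z * se_alpha),
            (beta_hat - z * se_beta, beta_hat + z * se_beta))
```

The inverse of this matrix also supplies the approximate covariance matrix needed for the delta method of the next subsection.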

Greene [21] used the delta method to construct approximate confidence intervals for the survival and hazard functions based on the MLEs. This method is used in this subsection to obtain a linear approximation of each function, whose variance can then be used for large-sample inference; see Greene [21] and Agresti [22]. Define

G_{1}=\left[\frac{\partial R(t)}{\partial\alpha}\ \ \frac{\partial R(t)}{\partial\beta}\right]\qquad\text{and}\qquad G_{2}=\left[\frac{\partial h(t)}{\partial\alpha}\ \ \frac{\partial h(t)}{\partial\beta}\right], (3.13)

    where

\frac{\partial R(t)}{\partial\alpha}=-(1+t^{\beta})^{-\alpha}\ln(1+t^{\beta}),\qquad \frac{\partial R(t)}{\partial\beta}=-\alpha t^{\beta}(1+t^{\beta})^{-(\alpha+1)}\ln(t),
\frac{\partial h(t)}{\partial\alpha}=\beta t^{\beta-1}(1+t^{\beta})^{-1},

and

\frac{\partial h(t)}{\partial\beta}=\frac{\alpha\left\{(1+t^{\beta})\left[t^{\beta-1}+\beta t^{\beta-1}\ln(t)\right]-\beta t^{2\beta-1}\ln(t)\right\}}{(1+t^{\beta})^{2}}.

Then, the approximate estimates of $V(\hat{R}(t))$ and $V(\hat{h}(t))$ are given, respectively, by

V(\hat{R}(t))\simeq\left[G_{1}^{t}\,I^{-1}(\alpha,\beta)\,G_{1}\right]_{(\hat{\alpha}_{ML},\hat{\beta}_{ML})},\qquad V(\hat{h}(t))\simeq\left[G_{2}^{t}\,I^{-1}(\alpha,\beta)\,G_{2}\right]_{(\hat{\alpha}_{ML},\hat{\beta}_{ML})},

where $G_{i}^{t}$ is the transpose of $G_{i}$, $i=1,2$. These results yield the approximate confidence intervals for $R(t)$ and $h(t)$ as

\left(\hat{R}(t)-z_{\gamma/2}\sqrt{V(\hat{R}(t))},\ \hat{R}(t)+z_{\gamma/2}\sqrt{V(\hat{R}(t))}\right), (3.14)

and

\left(\hat{h}(t)-z_{\gamma/2}\sqrt{V(\hat{h}(t))},\ \hat{h}(t)+z_{\gamma/2}\sqrt{V(\hat{h}(t))}\right). (3.15)
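The delta-method variances above are easy to assemble once $I^{-1}(\hat{\alpha},\hat{\beta})$ is available; the following short sketch (Python/NumPy, using the gradient expressions just given) returns the approximate variances of $\hat{R}(t)$ and $\hat{h}(t)$:

```python
import numpy as np

def delta_method_variances(t, alpha, beta, V):
    # V: 2x2 approximate covariance matrix of (alpha_hat, beta_hat), e.g. the inverse
    # observed information from the previous sketch; t > 0 is the time point.
    tb = t**beta
    dR = np.array([-(1.0 + tb)**(-alpha) * np.log1p(tb),                       # dR/d alpha
                   -alpha * tb * (1.0 + tb)**(-(alpha + 1.0)) * np.log(t)])    # dR/d beta
    dh = np.array([beta * t**(beta - 1.0) / (1.0 + tb),                        # dh/d alpha
                   alpha * ((1.0 + tb) * (t**(beta - 1.0) + beta * t**(beta - 1.0) * np.log(t))
                            - beta * t**(2.0 * beta - 1.0) * np.log(t)) / (1.0 + tb)**2])  # dh/d beta
    # approximate V(R_hat(t)) and V(h_hat(t)) used in (3.14)-(3.15)
    return dR @ V @ dR, dh @ V @ dh
```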

    Under the assumption that both parameters α and β are unknown, we may consider the joint prior density function of α and β which was suggested by Al-Hussaini and Jaheen [23] given by

\pi(\alpha,\beta)\ \propto\ \alpha^{a-1}\beta^{a+c-1}\exp(-b\beta)\exp[-\alpha(\beta d)], (4.1)

where $a$, $b$, $c$, $d$ are non-negative constants. If the hyperparameters $a$, $b$, $c$ and $d$ are chosen to be equal to zero, the informative prior reduces to a non-informative prior.

    Upon combining (2.6) and (4.1), given the generalized PHCS, the posterior density function of α,β is obtained as

\pi(\alpha,\beta|\underline{x})=\frac{L(\alpha,\beta|\underline{x})\,\pi(\alpha,\beta)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\alpha,\beta|\underline{x})\,\pi(\alpha,\beta)\,d\alpha\,d\beta}=I^{-1}\alpha^{D^{*}+a-1}\beta^{D^{*}+a+c-1}\exp(-b\beta)\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp\{-\alpha[W(\beta|\underline{x})+\beta d]\}, (4.2)

    where

I=\int_{0}^{\infty}\int_{0}^{\infty}\alpha^{D^{*}+a-1}\beta^{D^{*}+a+c-1}\exp(-b\beta)\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp\{-\alpha[W(\beta|\underline{x})+\beta d]\}\,d\alpha\,d\beta=\Gamma(D^{*}+a)\int_{0}^{\infty}\beta^{D^{*}+a+c-1}\exp(-b\beta)\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)[W(\beta|\underline{x})+\beta d]^{-(D^{*}+a)}\,d\beta. (4.3)

    Hence, from (1.4), the Bayesian estimates of α and β under the squared error loss function are obtained, respectively, as

\hat{\alpha}_{BS}=I^{-1}\Gamma(D^{*}+a+1)\int_{0}^{\infty}\beta^{D^{*}+a+c-1}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-b\beta)\,[W(\beta;\underline{x})+\beta d]^{-(D^{*}+a+1)}\,d\beta, (4.4)
\hat{\beta}_{BS}=I^{-1}\Gamma(D^{*}+a)\int_{0}^{\infty}\beta^{D^{*}+a+c}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-b\beta)\,[W(\beta;\underline{x})+\beta d]^{-(D^{*}+a)}\,d\beta. (4.5)

From (1.6), the Bayesian estimators of α and β under the LINEX loss function are obtained, respectively, as

\hat{\alpha}_{BL}=-\frac{1}{\upsilon}\ln\left\{I^{-1}\Gamma(D^{*}+a)\int_{0}^{\infty}\beta^{D^{*}+a+c-1}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-b\beta)\,[W(\beta;\underline{x})+\upsilon+\beta d]^{-(D^{*}+a)}\,d\beta\right\}, (4.6)
\hat{\beta}_{BL}=-\frac{1}{\upsilon}\ln\left\{I^{-1}\Gamma(D^{*}+a)\int_{0}^{\infty}\beta^{D^{*}+a+c-1}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-\beta(b+\upsilon))\,[W(\beta;\underline{x})+\beta d]^{-(D^{*}+a)}\,d\beta\right\}. (4.7)

From (1.8), the Bayesian estimators of α and β under the GE loss function are obtained, respectively, as

\hat{\alpha}_{BE}=\left\{I^{-1}\Gamma(D^{*}+a-\kappa)\int_{0}^{\infty}\beta^{D^{*}+a+c-1}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-b\beta)\,[W(\beta;\underline{x})+\beta d]^{-(D^{*}+a-\kappa)}\,d\beta\right\}^{-1/\kappa}, (4.8)
\hat{\beta}_{BE}=\left\{I^{-1}\Gamma(D^{*}+a)\int_{0}^{\infty}\beta^{D^{*}+a+c-\kappa-1}\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp(-b\beta)\,[W(\beta;\underline{x})+\beta d]^{-(D^{*}+a)}\,d\beta\right\}^{-1/\kappa}. (4.9)

Since the integrals in (4.4), (4.5), (4.6), (4.7), (4.8) and (4.9) cannot be computed analytically, the Markov chain Monte Carlo (MCMC) method is used to evaluate them. Now, based on the posterior distribution in (4.2), the conditional posterior distributions $\pi_{1}(\alpha|\beta;\underline{x})$ and $\pi_{2}(\beta|\alpha;\underline{x})$ of the parameters $\alpha$ and $\beta$ can be written, respectively, as

\pi_{1}(\alpha|\beta;\underline{x})=\frac{[W(\beta;\underline{x})+\beta d]^{D^{*}+a}}{\Gamma(D^{*}+a)}\,\alpha^{D^{*}+a-1}\exp\{-\alpha[W(\beta;\underline{x})+\beta d]\}=\text{Gamma}\left[D^{*}+a,\ W(\beta;\underline{x})+\beta d\right], (4.10)

    and

\pi_{2}(\beta|\alpha;\underline{x})\propto\beta^{D^{*}+a+c-1}\exp(-b\beta)\left(\prod_{i=1}^{D^{*}}\frac{x_{i}^{\beta-1}}{1+x_{i}^{\beta}}\right)\exp\{-\alpha[W(\beta;\underline{x})+\beta d]\}. (4.11)

Since the conditional distribution of $\beta$ in (4.11) is not a well-known distribution, the Metropolis-Hastings sampler is used to generate samples of $\beta$ inside the MCMC algorithm; see Metropolis et al. [24]. Algorithm 1 is used to generate samples of $\alpha$ and $\beta$ from the conditional posterior distributions, which are then used to approximate their Bayes estimates.

Algorithm 1 MCMC method.
Step 1. Start with $\alpha^{(0)}=\hat{\alpha}_{ML}$ and $\beta^{(0)}=\hat{\beta}_{ML}$.
Step 2. Set $i=1$.
Step 3. Generate $\alpha^{(i)}\sim\text{Gamma}\left[D^{*}+a,\ W(\beta^{(i-1)};\underline{x})+\beta^{(i-1)}d\right]=\pi_{1}(\alpha|\beta^{(i-1)};\underline{x})$.
Step 4. Generate a proposal $\beta^{(*)}$ from $N(\beta^{(i-1)},V(\hat{\beta}))$.
Step 5. Calculate the acceptance probability $d_{\beta}=\min\left[1,\ \pi_{2}(\beta^{(*)}|\alpha^{(i)};\underline{x})\big/\pi_{2}(\beta^{(i-1)}|\alpha^{(i)};\underline{x})\right]$.
Step 6. Generate $u_{1}$ from the Uniform(0,1) distribution; if $u_{1}\le d_{\beta}$, set $\beta^{(i)}=\beta^{(*)}$, else set $\beta^{(i)}=\beta^{(i-1)}$.
Step 7. Set $i=i+1$ and repeat Steps 3 to 6, $N$ times, to obtain $(\alpha^{(j)},\beta^{(j)})$, $j=1,2,\ldots,N$.
Step 8. Remove the first $B$ values of $\alpha$ and $\beta$ (the burn-in period), retaining $\alpha^{(j)}$ and $\beta^{(j)}$, $j=1,2,\ldots,N-B$.
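A minimal sketch of Algorithm 1 (a Gibbs step for $\alpha$ and a Metropolis step for $\beta$) is given below, assuming Python/NumPy. The proposal standard deviation `prop_sd` (playing the role of $\sqrt{V(\hat{\beta})}$), the chain length, the burn-in size and the random seed are illustrative choices of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility only

def W(beta, x, R, R_tau, T):
    x, R = np.asarray(x, dtype=float), np.asarray(R, dtype=float)
    return np.sum((R + 1.0) * np.log1p(x**beta)) + R_tau * np.log1p(T**beta)

def log_pi2(beta, alpha, x, R, R_tau, T, a, b, c, d):
    # log of the conditional posterior pi_2(beta | alpha; x) in (4.11), up to a constant
    if beta <= 0.0:
        return -np.inf
    x = np.asarray(x, dtype=float)
    D_star = x.size
    return ((D_star + a + c - 1.0) * np.log(beta) - b * beta
            + (beta - 1.0) * np.sum(np.log(x)) - np.sum(np.log1p(x**beta))
            - alpha * (W(beta, x, R, R_tau, T) + beta * d))

def algorithm1(x, R, R_tau, T, a, b, c, d, alpha0, beta0,
               n_iter=11000, burn=1000, prop_sd=0.1):
    D_star = len(x)
    alpha, beta = alpha0, beta0                       # Step 1: start at the ML estimates
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        # Step 3: alpha | beta ~ Gamma(D* + a, rate = W(beta; x) + beta * d), see (4.10)
        rate = W(beta, x, R, R_tau, T) + beta * d
        alpha = rng.gamma(shape=D_star + a, scale=1.0 / rate)
        # Steps 4-6: Metropolis-Hastings update of beta with a normal proposal
        beta_prop = rng.normal(beta, prop_sd)
        log_acc = (log_pi2(beta_prop, alpha, x, R, R_tau, T, a, b, c, d)
                   - log_pi2(beta, alpha, x, R, R_tau, T, a, b, c, d))
        if np.log(rng.uniform()) <= log_acc:
            beta = beta_prop
        chain[i] = (alpha, beta)                      # Step 7: store the draw
    return chain[burn:]                               # Step 8: discard the burn-in period
```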


Assume $g(\alpha,\beta)$ is any function of $\alpha$ and $\beta$; then the Bayesian estimates of $g$ based on the MCMC draws are obtained as follows.

Based on the SE loss function, the Bayesian estimate of $g$ is given by

\widehat{g(\alpha,\beta)}_{BS}=\frac{1}{N-B}\sum_{i=1}^{N-B}g(\alpha^{(i)},\beta^{(i)}), (4.12)

Based on the LINEX loss function,

\widehat{g(\alpha,\beta)}_{BL}=-\frac{1}{\upsilon}\ln\left[\frac{1}{N-B}\sum_{i=1}^{N-B}e^{-\upsilon g(\alpha^{(i)},\beta^{(i)})}\right], (4.13)

and for the GE loss function, the Bayesian estimate is given by

\widehat{g(\alpha,\beta)}_{BE}=\left[\frac{1}{N-B}\sum_{i=1}^{N-B}\left[g(\alpha^{(i)},\beta^{(i)})\right]^{-\kappa}\right]^{-1/\kappa}. (4.14)

A 100(1−γ)% Bayesian confidence interval, or credible interval, $(L,U)$ for a parameter $\theta$ ($\theta$ being $\alpha$ or $\beta$) satisfies

\int_{L}^{U}\pi(\theta|\underline{x})\,d\theta=1-\gamma. (4.15)

Since the integral in (4.15) cannot be solved analytically, the 100(1−γ)% MCMC approximate credible intervals for $\alpha$ and $\beta$, using the $(N-B)$ generated values sorted in ascending order, $(\alpha_{(1)},\alpha_{(2)},\ldots,\alpha_{(N-B)})$ and $(\beta_{(1)},\beta_{(2)},\ldots,\beta_{(N-B)})$, are given by

\left(\alpha_{[(N-B)\gamma/2]},\ \alpha_{[(N-B)(1-\gamma/2)]}\right)\qquad\text{and}\qquad\left(\beta_{[(N-B)\gamma/2]},\ \beta_{[(N-B)(1-\gamma/2)]}\right), (4.16)

and the length of a credible interval is the absolute difference between its lower and upper bounds.
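Given the post-burn-in draws, the estimates (4.12)-(4.14) and the credible bounds (4.16) reduce to simple sample summaries. A small sketch follows (Python/NumPy; the values of υ and κ shown are those used in the simulation section, and γ = 0.05 corresponds to the 95% intervals):

```python
import numpy as np

def bayes_summaries(draws, upsilon=0.5, kappa=0.5, gamma=0.05):
    # draws: post-burn-in MCMC values of a scalar quantity g(alpha, beta)
    g = np.asarray(draws, dtype=float)
    est_se    = g.mean()                                            # (4.12), SE loss
    est_linex = -np.log(np.mean(np.exp(-upsilon * g))) / upsilon    # (4.13), LINEX loss
    est_ge    = np.mean(g**(-kappa))**(-1.0 / kappa)                # (4.14), GE loss
    lower, upper = np.quantile(g, [gamma / 2.0, 1.0 - gamma / 2.0]) # (4.16) credible bounds
    return est_se, est_linex, est_ge, (lower, upper)
```

For instance, `bayes_summaries(chain[:, 0])` applied to the $\alpha$-column of the chain returned by the previous sketch gives the three point estimates and the 95% credible interval for $\alpha$; the same call on `chain[:, 1]` does so for $\beta$.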

For $\rho=1,2,\ldots,R_{j}$, let $X_{\rho:R_{j}}$ denote the $\rho$th order statistic of the $R_{j}$ units removed at stage $j$. Then the conditional density function of $X_{\rho:R_{j}}$, given the observed generalized PHCS, is given (see Basak et al. [25]) by

f(X_{\rho:R_{j}}|\underline{x})=f(x|\underline{x})=\frac{R_{j}!}{(\rho-1)!(R_{j}-\rho)!}\,\frac{[F(x)-F(x_{j})]^{\rho-1}[1-F(x)]^{R_{j}-\rho}f(x)}{[1-F(x_{j})]^{R_{j}}},\quad x>x_{j}, (5.1)

    where

j=\begin{cases}1,\ldots,k, & \text{if } T<X_{k:m:n}<X_{m:m:n},\\ 1,\ldots,D,\tau, & \text{if } X_{k:m:n}<T<X_{m:m:n},\\ 1,\ldots,m, & \text{if } X_{k:m:n}<X_{m:m:n}<T,\end{cases}

with $x_{\tau}=T$.

By using (1.2) and (1.1) in (5.1), given the generalized PHCS, the conditional density function of $X_{\rho:R_{j}}$ is then given as follows:

f(x|\underline{x})=\sum_{q=0}^{\rho-1}C_{q}\,\alpha\beta\,\frac{x^{\beta-1}}{1+x^{\beta}}\exp\left\{-\alpha\left[\varpi_{q}\ln\left(\frac{1+x^{\beta}}{1+x_{j}^{\beta}}\right)\right]\right\},\quad x>x_{j}, (5.2)

where $C_{q}=(-1)^{q}\binom{\rho-1}{q}\frac{R_{j}!}{(\rho-1)!(R_{j}-\rho)!}$ and $\varpi_{q}=q+R_{j}-\rho+1$ for $q=0,\ldots,\rho-1$.

Upon combining (4.2) and (5.2) and using the MCMC technique, the Bayesian predictive density function of $X_{\rho:R_{j}}$, given the generalized PHCS, is obtained as

f^{*}(x|\underline{x})=\int_{0}^{\infty}\int_{0}^{\infty}f(x|\underline{x})\,\pi(\alpha,\beta|\underline{x})\,d\alpha\,d\beta=\frac{1}{N-B}\sum_{i=1}^{N-B}\sum_{q=0}^{\rho-1}C_{q}\,\alpha^{(i)}\beta^{(i)}\,\frac{x^{\beta^{(i)}-1}}{1+x^{\beta^{(i)}}}\exp\left\{-\alpha^{(i)}\left[\varpi_{q}\ln\left(\frac{1+x^{\beta^{(i)}}}{1+x_{j}^{\beta^{(i)}}}\right)\right]\right\}. (5.3)

The Bayesian predictive survival function of $X_{\rho:R_{j}}$, given the generalized PHCS, is given as

\bar{F}^{*}(t|\underline{x})=\int_{t}^{\infty}f^{*}(x|\underline{x})\,dx=\frac{1}{N-B}\sum_{i=1}^{N-B}\sum_{q=0}^{\rho-1}\frac{C_{q}}{\varpi_{q}}\left(\frac{1+t^{\beta^{(i)}}}{1+x_{j}^{\beta^{(i)}}}\right)^{-\alpha^{(i)}\varpi_{q}}. (5.4)

The Bayesian point predictor of $X_{\rho:R_{j}}$ under the squared error loss function is the mean of the predictive density, given by

\hat{X}_{\rho:R_{j}}=\int_{0}^{\infty}x\,f^{*}(x|\underline{x})\,dx=\frac{1}{N-B}\sum_{i=1}^{N-B}\sum_{q=0}^{\rho-1}C_{q}\,\alpha^{(i)}\left(1+x_{j}^{\beta^{(i)}}\right)^{\alpha^{(i)}\varpi_{q}}\frac{\Gamma\!\left(1+\frac{1}{\beta^{(i)}}\right)\Gamma\!\left(\alpha^{(i)}\varpi_{q}-\frac{1}{\beta^{(i)}}\right)}{\Gamma\!\left(1+\alpha^{(i)}\varpi_{q}\right)}, (5.5)

where $f^{*}(x|\underline{x})$ is given as in (5.3). The Bayesian predictive bounds of a 100(1−γ)% two-sided equi-tailed (ET) interval for $X_{\rho:R_{j}}$ can be obtained by solving the following two equations:

\bar{F}^{*}(L_{ET}|\underline{x})=1-\frac{\gamma}{2}\qquad\text{and}\qquad\bar{F}^{*}(U_{ET}|\underline{x})=\frac{\gamma}{2}, (5.6)

where $\bar{F}^{*}(t|\underline{x})$ is given as in (5.4), and $L_{ET}$ and $U_{ET}$ denote the lower and upper bounds, respectively. On the other hand, for the highest posterior density (HPD) method, the following two equations need to be solved:

\bar{F}^{*}(L_{HPD}|\underline{x})-\bar{F}^{*}(U_{HPD}|\underline{x})=1-\gamma,

and

f^{*}(L_{HPD}|\underline{x})-f^{*}(U_{HPD}|\underline{x})=0,

where $f^{*}(x|\underline{x})$ is as in (5.3), and $L_{HPD}$ and $U_{HPD}$ denote the HPD lower and upper bounds, respectively.
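To make the prediction step concrete, the sketch below evaluates the Monte Carlo form of the predictive survival function (5.4) from the MCMC chain and solves (5.6) for the ET bounds by bracketing; the search upper bound passed to the root finder is an assumption of the sketch (Python with NumPy/SciPy):

```python
import math
import numpy as np
from scipy.optimize import brentq
from scipy.special import comb

def predictive_sf(t, chain, x_j, R_j, rho):
    # Monte Carlo estimate of the predictive survival function (5.4) of X_{rho:R_j}, t > x_j
    alpha, beta = chain[:, 0], chain[:, 1]
    const = math.factorial(R_j) / (math.factorial(rho - 1) * math.factorial(R_j - rho))
    total = np.zeros_like(alpha)
    for q in range(rho):
        C_q = (-1.0)**q * comb(rho - 1, q) * const
        w_q = q + R_j - rho + 1
        total += (C_q / w_q) * ((1.0 + t**beta) / (1.0 + x_j**beta))**(-alpha * w_q)
    return total.mean()

def et_bounds(chain, x_j, R_j, rho, gamma=0.05, upper=50.0):
    # Solve (5.6): SF(L) = 1 - gamma/2 and SF(U) = gamma/2 on the bracket (x_j, upper)
    eps = 1e-8
    L = brentq(lambda t: predictive_sf(t, chain, x_j, R_j, rho) - (1.0 - gamma / 2.0),
               x_j + eps, upper)
    U = brentq(lambda t: predictive_sf(t, chain, x_j, R_j, rho) - gamma / 2.0,
               x_j + eps, upper)
    return L, U
```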

Let $Y_{1:\ell:N}\le Y_{2:\ell:N}\le\cdots\le Y_{\ell:\ell:N}$ be a future independent progressive Type-Ⅱ censored sample from the same population, with censoring scheme $S=(S_{1},\ldots,S_{\ell})$. In this section, we develop a general procedure for deriving the point and interval predictions of $Y_{s:\ell:N}$, $1\le s\le\ell$, based on the observed generalized PHCS. The marginal density function of $Y_{s:\ell:N}$ is given by Balakrishnan et al. [26] as

f_{Y_{s:\ell:N}}(y_{s}|\alpha,\beta)=c(N,s)\sum_{q=0}^{s-1}c_{q,s-1}\left[1-F(y_{s})\right]^{M_{q,s}-1}f(y_{s}), (6.1)

where $1\le s\le\ell$, $c(N,s)=N(N-S_{1}-1)\cdots(N-S_{1}-\cdots-S_{s-1}-s+1)$, $M_{q,s}=N-S_{1}-\cdots-S_{s-q-1}-s+q+1$, and $c_{q,s-1}=(-1)^{q}\left\{\left[\prod_{u=1}^{q}\sum_{\upsilon=s-q}^{s-q+u-1}(S_{\upsilon}+1)\right]\left[\prod_{u=1}^{s-q-1}\sum_{\upsilon=u}^{s-q-1}(S_{\upsilon}+1)\right]\right\}^{-1}$.

Upon substituting (1.2) and (1.1) in (6.1), the marginal density function of $Y_{s:\ell:N}$ is then obtained as

f_{Y_{s:\ell:N}}(y_{s}|\alpha,\beta)=c(N,s)\sum_{q=0}^{s-1}c_{q,s-1}\,\alpha\beta\,\frac{y_{s}^{\beta-1}}{1+y_{s}^{\beta}}\exp\left\{-\alpha\left[M_{q,s}\ln(1+y_{s}^{\beta})\right]\right\},\quad y_{s}>0. (6.2)

Upon combining (4.2) and (6.2) and using the MCMC method, given the generalized PHCS, the Bayesian predictive density function of $Y_{s:\ell:N}$ is obtained as

f^{*}_{Y_{s:\ell:N}}(y_{s}|\underline{x})=\int_{0}^{\infty}\int_{0}^{\infty}f_{Y_{s:\ell:N}}(y_{s}|\alpha,\beta)\,\pi(\alpha,\beta|\underline{x})\,d\alpha\,d\beta=\frac{c(N,s)}{N-B}\sum_{i=1}^{N-B}\sum_{q=0}^{s-1}c_{q,s-1}\,\alpha^{(i)}\beta^{(i)}\,\frac{y_{s}^{\beta^{(i)}-1}}{1+y_{s}^{\beta^{(i)}}}\exp\left\{-\alpha^{(i)}\left[M_{q,s}\ln(1+y_{s}^{\beta^{(i)}})\right]\right\}. (6.3)

From (6.3), we readily obtain the predictive survival function of $Y_{s:\ell:N}$, given the generalized PHCS, as

\bar{F}^{*}_{Y_{s:\ell:N}}(t|\underline{x})=\int_{t}^{\infty}f^{*}_{Y_{s:\ell:N}}(y_{s}|\underline{x})\,dy_{s}=\frac{c(N,s)}{N-B}\sum_{i=1}^{N-B}\sum_{q=0}^{s-1}\frac{c_{q,s-1}}{M_{q,s}}\left(1+t^{\beta^{(i)}}\right)^{-\alpha^{(i)}M_{q,s}}. (6.4)

The Bayesian point predictor of $Y_{s:\ell:N}$, $1\le s\le\ell$, under the squared error loss function is the mean of the predictive density, given by

\hat{Y}_{s:\ell:N}=\int_{0}^{\infty}y_{s}\,f^{*}_{Y_{s:\ell:N}}(y_{s}|\underline{x})\,dy_{s}, (6.5)

where $f^{*}_{Y_{s:\ell:N}}(y_{s}|\underline{x})$ is given as in (6.3).

The Bayesian predictive bounds of a 100(1−γ)% ET interval for $Y_{s:\ell:N}$, $1\le s\le\ell$, can be obtained by solving the following two equations:

\bar{F}^{*}_{Y_{s:\ell:N}}(L_{ET}|\underline{x})=1-\frac{\gamma}{2}\qquad\text{and}\qquad\bar{F}^{*}_{Y_{s:\ell:N}}(U_{ET}|\underline{x})=\frac{\gamma}{2}, (6.6)

where $\bar{F}^{*}_{Y_{s:\ell:N}}(t|\underline{x})$ is given as in (6.4), and $L_{ET}$ and $U_{ET}$ denote the lower and upper bounds, respectively. For the HPD method, the following two equations need to be solved:

\bar{F}^{*}_{Y_{s:\ell:N}}(L_{HPD}|\underline{x})-\bar{F}^{*}_{Y_{s:\ell:N}}(U_{HPD}|\underline{x})=1-\gamma,

and

f^{*}_{Y_{s:\ell:N}}(L_{HPD}|\underline{x})-f^{*}_{Y_{s:\ell:N}}(U_{HPD}|\underline{x})=0,

where $f^{*}_{Y_{s:\ell:N}}(y_{s}|\underline{x})$ is as in (6.3), and $L_{HPD}$ and $U_{HPD}$ denote the HPD lower and upper bounds, respectively.

Before progressing further, we first describe how generalized PHCS data are generated for a given set $n$, $m$, $k$, $R_{1},R_{2},\ldots,R_{m}$ and $T$. We use the transformation suggested by Balakrishnan and Aggarwala [1].

Thus, the generalized PHCS data can be generated as follows. If $T<X_{k:m:n}<X_{m:m:n}$ (Case Ⅰ), the transformation suggested by Ng et al. [27] is applied and the corresponding generalized PHCS is $(X_{1:m:n},\ldots,X_{D:m:n},X_{D+1:m:n},\ldots,X_{k:m:n})$. If $X_{k:m:n}<T<X_{m:m:n}$ (Case Ⅱ), we find $D$ such that $X_{D:m:n}<T<X_{D+1:m:n}$, and the corresponding generalized PHCS is $(X_{1:m:n},\ldots,X_{D:m:n})$. If $X_{k:m:n}<X_{m:m:n}<T$ (Case Ⅲ), the corresponding generalized PHCS is $(X_{1:m:n},\ldots,X_{m:m:n})$.
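A simplified data-generation sketch is given below (Python/NumPy). It implements the Balakrishnan-Aggarwala transformation for a full progressive Type-Ⅱ censored Burr Type-Ⅻ sample and then applies the generalized PHCS stopping rule; the Case-Ⅰ re-adjustment of the removals after $T$ via Ng et al. [27] is not reproduced here, and the seed and parameter values in the example call are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)   # arbitrary seed, for reproducibility only

def burr12_ppf(u, alpha, beta):
    # Inverse CDF of the Burr XII distribution: F^-1(u) = ((1 - u)^(-1/alpha) - 1)^(1/beta)
    return ((1.0 - u)**(-1.0 / alpha) - 1.0)**(1.0 / beta)

def progressive_type2_sample(R, alpha, beta):
    # Balakrishnan-Aggarwala transformation: uniform progressive order statistics,
    # mapped through the inverse CDF to Burr XII progressive Type-II order statistics.
    R = np.asarray(R)
    m = len(R)
    w = rng.uniform(size=m)
    expo = np.arange(1, m + 1) + np.cumsum(R[::-1])    # i + R_m + ... + R_{m-i+1}
    v = w**(1.0 / expo)
    u = 1.0 - np.cumprod(v[::-1])                      # U_{1:m:n} < ... < U_{m:m:n}
    return burr12_ppf(u, alpha, beta)

def classify_generalized_phcs(x_prog, R, n, k, T):
    # Decide which case of the generalized PHCS applies and return the observed
    # failures together with D* and R_tau, following (2.3)-(2.5).
    m = len(x_prog)
    D = int(np.sum(x_prog <= T))
    if T < x_prog[k - 1]:                              # Case I: terminate at X_{k:m:n}
        return "Case I", x_prog[:k], k, 0
    if T < x_prog[m - 1]:                              # Case II: terminate at T
        R_tau = n - D - int(np.sum(np.asarray(R)[:D]))
        return "Case II", x_prog[:D], D, R_tau
    return "Case III", x_prog, m, 0                    # Case III: terminate at X_{m:m:n}

# Example: n = 30, m = 20, removals R_i = 1 for odd i and 0 for even i
# (Scheme 1 of the simulation study below), alpha = 2, beta = 1
R = [1, 0] * 10
x = progressive_type2_sample(R, alpha=2.0, beta=1.0)
print(classify_generalized_phcs(x, R, n=30, k=15, T=1.5))
```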

In this section, a Monte Carlo simulation study is carried out to compare the performance of the ML and Bayesian estimates under various sampling schemes. Different values of $n$, $m$, $k$ and $T$ are used to generate 2000 generalized PHCS samples from the Burr Type-Ⅻ distribution (with α=2 and β=1). The values of $T$ are chosen so that the three cases of the generalized PHCS are represented: $T_{1}$ lies in the first quarter of the data, $T_{2}$ is chosen around the mean, and $T_{3}$ lies within the third quarter; they are chosen to be $(T_{1},T_{2},T_{3})=(0.5,1.5,2.5)$. For the purpose of comparison, we computed the ML estimates and the Bayesian estimates of α and β under the SE, LINEX (with υ = 0.5) and GE (with κ = 0.5) loss functions using informative priors (IP) and non-informative priors (NIP). We also computed the mean square error (MSE) and the estimated expected bias (EB) of each estimate.

Different sample sizes ($n$) and effective sample sizes ($m$, $k$) are used in the simulation study, while the surviving units are removed according to the following censoring schemes:

1. Scheme 1: $R_{i}=2(n-m)/m$ if $i$ is odd and $R_{i}=0$ if $i$ is even.

2. Scheme 2: $R_{i}=2(n-m)/m$ if $i$ is even and $R_{i}=0$ if $i$ is odd.

3. Scheme 3: $R_{i}=0$ for $i=1,2,\ldots,D^{*}-1$ and $R_{i}=n-D^{*}$ for $i=D^{*}$.

All these cases are adopted according to the case of generalized progressive censoring, and all Bayesian results are computed based on two different choices of the hyperparameters $(a,b,c,d)$, namely,

1. Informative prior (IP): $a=80$, $b=20$, $c=20$ and $d=40$ (chosen so that the marginal prior of α has mean 2 and variance 0.05, and the marginal prior of β has mean 1 and variance 0.05).

    2. Non-informative prior (NIP): a=b=c=d=0.

The 90% and 95% asymptotic confidence intervals and Bayesian credible intervals for $\hat\alpha$, $\hat\beta$, $\hat{R}(t)$ and $\hat{h}(t)$ are constructed, and their estimated average lengths (AL) are computed; the estimated coverage probabilities (CP) of these intervals are computed as the number of intervals that cover the true values divided by 2000. The credible intervals are obtained under the informative and non-informative priors.

Tables 1–4 present the values of the MSE and EB of the ML and Bayesian estimates of α, β, $R(t)$ and $h(t)$, respectively, based on the different values of $T$ under the three censoring schemes, while Tables 5–8 present the AL of the 90% and 95% confidence intervals and the corresponding CP for $\hat\alpha$, $\hat\beta$, $\hat{R}(t)$ and $\hat{h}(t)$, respectively.

    Table 1.  MSE and EB of the ML and Bayesian estimates for α based on the different censoring schemes.
    Bayesian
$\hat\alpha_{BS}$ $\hat\alpha_{BL}$ $\hat\alpha_{BE}$
T (n,m,k) Sch. $\hat\alpha_{ML}$ IP NIP IP NIP IP NIP
    MSE
    0.5 (30, 20, 15) 1 0.7260 0.0376 0.8161 0.0367 0.5049 0.0374 0.5752
    (50, 20, 15) 1 0.7409 0.0295 0.8759 0.0288 0.5410 0.0292 0.6194
    (60, 30, 20) 1 0.5895 0.0266 0.6700 0.0261 0.4116 0.0263 0.5236
    (30, 20, 15) 2 0.7380 0.0387 0.8129 0.0377 0.5363 0.0384 0.5952
    (50, 20, 15) 2 0.7841 0.0303 0.8880 0.0298 0.5606 0.0303 0.6398
    (60, 30, 20) 2 0.4400 0.0268 0.4618 0.0265 0.3516 0.0268 0.3783
    (30, 20, 15) 3 0.7380 0.0387 0.8129 0.0377 0.5363 0.0384 0.5952
    (50, 20, 15) 3 2.0293 0.0266 3.5509 0.0260 1.0981 0.0263 1.7344
    (60, 30, 20) 3 0.4880 0.0249 0.5619 0.0245 0.3979 0.0248 0.4265
    1.5 (30, 20, 15) 1 0.4113 0.0398 0.4101 0.0387 0.3252 0.0393 0.3451
    (50, 20, 15) 1 0.9143 0.0313 0.9600 0.0306 0.5785 0.0310 0.6852
    (60, 30, 20) 1 0.3014 0.0265 0.3088 0.0261 0.2532 0.0264 0.2628
    (30, 20, 15) 2 0.4984 0.0392 0.5099 0.0380 0.3805 0.0384 0.4113
    (50, 20, 15) 2 0.7459 0.0294 0.8166 0.0288 0.5127 0.0292 0.5813
    (60, 30, 20) 2 0.3241 0.0252 0.3285 0.0246 0.2667 0.0249 0.2760
    (30, 20, 15) 3 0.7213 0.0378 0.8757 0.0366 0.5061 0.0370 0.6300
    (50, 20, 15) 3 1.7815 0.0281 2.6421 0.0276 0.9627 0.0279 1.4524
    (60, 30, 20) 3 0.5098 0.0234 0.5700 0.0229 0.3964 0.0232 0.4295
    2.5 (30, 20, 15) 1 0.4062 0.0384 0.4109 0.0372 0.3207 0.0377 0.3382
    (50, 20, 15) 1 0.8415 0.0348 0.9189 0.0341 0.5733 0.0346 0.6804
    (60, 30, 20) 1 0.3611 0.0269 0.3754 0.0263 0.2956 0.0265 0.3132
    (30, 20, 15) 2 0.4813 0.0407 0.4905 0.0394 0.3771 0.0399 0.3975
    (50, 20, 15) 2 0.8843 0.0315 1.0270 0.0305 0.6038 0.0307 0.7125
    (60, 30, 20) 2 0.3043 0.0273 0.3211 0.0266 0.2550 0.0268 0.2673
    (30, 20, 15) 3 0.6830 0.0378 0.8127 0.0366 0.5044 0.0371 0.5854
    (50, 20, 15) 3 2.1179 0.0280 3.1983 0.0275 1.0682 0.0278 1.6650
    (60, 30, 20) 3 0.5474 0.0252 0.6221 0.0248 0.4210 0.0251 0.4634
    EB
    0.5 (30, 20, 15) 1 0.2803 0.0055 0.3050 0.0106 0.1576 0.0186 0.1316
    (50, 20, 15) 1 0.2594 0.0080 0.3000 0.0055 0.1385 0.0122 0.1100
    (60, 30, 20) 1 0.1742 0.0088 0.1950 0.0029 0.0998 0.0087 0.0773
    (30, 20, 15) 2 0.2887 0.0053 0.3100 0.0108 0.1646 0.0188 0.1375
    (50, 20, 15) 2 0.2532 0.0009 0.2830 0.0122 0.1273 0.0188 0.0998
    (60, 30, 20) 2 0.1582 0.0003 0.1720 0.0111 0.0834 0.0168 0.0583
    (30, 20, 15) 3 0.2754 0.0053 0.3100 0.0108 0.1646 0.0188 0.1375
    (50, 20, 15) 3 0.5515 0.0049 0.7280 0.0074 0.3695 0.0136 0.4064
    (60, 30, 20) 3 0.2281 0.0034 0.2630 0.0071 0.1701 0.0124 0.1541
    1.5 (30, 20, 15) 1 0.1591 0.0116 0.1440 0.0044 0.0670 0.0123 0.0422
    (50, 20, 15) 1 0.3156 0.0063 0.3150 0.0071 0.1819 0.0137 0.1670
    (60, 30, 20) 1 0.1406 0.0029 0.1360 0.0083 0.0798 0.0139 0.0620
    (30, 20, 15) 2 0.2166 0.0168 0.2050 0.0010 0.1208 0.0068 0.0993
    (50, 20, 15) 2 0.2976 0.0056 0.3120 0.0075 0.1818 0.0140 0.1642
    (60, 30, 20) 2 0.1721 0.0079 0.1630 0.0033 0.1052 0.0088 0.0881
    (30, 20, 15) 3 0.2681 0.0165 0.2840 0.0013 0.1738 0.0061 0.1619
    (50, 20, 15) 3 0.4818 0.0022 0.6330 0.0101 0.3298 0.0163 0.3489
    (60, 30, 20) 3 0.2300 0.0060 0.2620 0.0046 0.1701 0.0098 0.1552
    2.5 (30, 20, 15) 1 0.1914 0.0141 0.1770 0.0018 0.1018 0.0097 0.0794
    (50, 20, 15) 1 0.2786 0.0015 0.2830 0.0118 0.1596 0.0183 0.1438
    (60, 30, 20) 1 0.1764 0.0080 0.1730 0.0033 0.1150 0.0089 0.0995
    (30, 20, 15) 2 0.2281 0.0159 0.2170 0.0001 0.1368 0.0079 0.1165
    (50, 20, 15) 2 0.3657 0.0178 0.3810 0.0046 0.2358 0.0018 0.2236
    (60, 30, 20) 2 0.1573 0.0137 0.1540 0.0025 0.0978 0.0030 0.0814
    (30, 20, 15) 3 0.2831 0.0164 0.3000 0.0011 0.1882 0.0063 0.1754
    (50, 20, 15) 3 0.5227 0.0027 0.6720 0.0096 0.3500 0.0158 0.3758
    (60, 30, 20) 3 0.2453 0.0001 0.2750 0.0105 0.1795 0.0158 0.1656

    Table 2.  MSE and EB of the ML and Bayesian estimates for β based on the different censoring schemes.
    Bayesian
$\hat\beta_{BS}$ $\hat\beta_{BL}$ $\hat\beta_{BE}$
T (n,m,k) Sch. $\hat\beta_{ML}$ IP NIP IP NIP IP NIP
    MSE
    0.5 (30, 20, 15) 1 0.0546 0.0083 0.0556 0.0081 0.0519 0.0080 0.0492
    (50, 20, 15) 1 0.0409 0.0064 0.0415 0.0064 0.0396 0.0063 0.0385
    (60, 30, 20) 1 0.0315 0.0058 0.0329 0.0057 0.0318 0.0057 0.0312
    (30, 20, 15) 2 0.0612 0.0086 0.0629 0.0084 0.0588 0.0082 0.0560
    (50, 20, 15) 2 0.0413 0.0068 0.0418 0.0067 0.0399 0.0066 0.0389
    (60, 30, 20) 2 0.0286 0.0057 0.0293 0.0056 0.0283 0.0055 0.0277
    (30, 20, 15) 3 0.0612 0.0086 0.0629 0.0084 0.0588 0.0082 0.0560
    (50, 20, 15) 3 0.0616 0.0058 0.0650 0.0058 0.0604 0.0057 0.0569
    (60, 30, 20) 3 0.0330 0.0054 0.0347 0.0053 0.0332 0.0052 0.0319
    1.5 (30, 20, 15) 1 0.0383 0.0087 0.0378 0.0085 0.0362 0.0083 0.0353
    (50, 20, 15) 1 0.0405 0.0067 0.0399 0.0066 0.0381 0.0064 0.0368
    (60, 30, 20) 1 0.0249 0.0059 0.0250 0.0058 0.0243 0.0057 0.0238
    (30, 20, 15) 2 0.0382 0.0083 0.0382 0.0082 0.0364 0.0080 0.0350
    (50, 20, 15) 2 0.0341 0.0065 0.0339 0.0064 0.0323 0.0063 0.0310
    (60, 30, 20) 2 0.0235 0.0052 0.0236 0.0051 0.0228 0.0051 0.0222
    (30, 20, 15) 3 0.0466 0.0078 0.0467 0.0077 0.0440 0.0076 0.0419
    (50, 20, 15) 3 0.0570 0.0064 0.0606 0.0063 0.0567 0.0061 0.0536
    (60, 30, 20) 3 0.0309 0.0051 0.0320 0.0050 0.0306 0.0050 0.0295
    2.5 (30, 20, 15) 1 0.0401 0.0086 0.0396 0.0084 0.0378 0.0083 0.0366
    (50, 20, 15) 1 0.0398 0.0075 0.0398 0.0074 0.0381 0.0073 0.0369
    (60, 30, 20) 1 0.0246 0.0059 0.0248 0.0058 0.0240 0.0057 0.0234
    (30, 20, 15) 2 0.0410 0.0086 0.0412 0.0084 0.0393 0.0083 0.0378
    (50, 20, 15) 2 0.0372 0.0066 0.0374 0.0066 0.0356 0.0065 0.0340
    (60, 30, 20) 2 0.0214 0.0057 0.0216 0.0056 0.0209 0.0055 0.0205
    (30, 20, 15) 3 0.0534 0.0084 0.0544 0.0082 0.0512 0.0081 0.0489
    (50, 20, 15) 3 0.0625 0.0063 0.0646 0.0062 0.0603 0.0061 0.0571
    (60, 30, 20) 3 0.0338 0.0056 0.0345 0.0055 0.0329 0.0054 0.0315
    EB
    0.5 (30, 20, 15) 1 0.0741 0.0134 0.0697 0.0104 0.0582 0.0048 0.0380
    (50, 20, 15) 1 0.0451 0.0071 0.0404 0.0049 0.0314 0.0007 0.0146
    (60, 30, 20) 1 0.0331 0.0059 0.0298 0.0040 0.0231 0.0004 0.0106
    (30, 20, 15) 2 0.0789 0.0130 0.0738 0.0101 0.0620 0.0045 0.0418
    (50, 20, 15) 2 0.0466 0.0101 0.0397 0.0080 0.0308 0.0038 0.0143
    (60, 30, 20) 2 0.0349 0.0091 0.0307 0.0073 0.0242 0.0038 0.0119
    (30, 20, 15) 3 0.0789 0.0130 0.0738 0.0101 0.0620 0.0045 0.0418
    (50, 20, 15) 3 0.0935 0.0091 0.0935 0.0071 0.0822 0.0032 0.0635
    (60, 30, 20) 3 0.0538 0.0082 0.0543 0.0064 0.0477 0.0030 0.0358
    1.5 (30, 20, 15) 1 0.0446 0.0105 0.0379 0.0076 0.0298 0.0019 0.0148
    (50, 20, 15) 1 0.0594 0.0096 0.0512 0.0074 0.0438 0.0031 0.0304
    (60, 30, 20) 1 0.0359 0.0080 0.0323 0.0062 0.0274 0.0026 0.0182
    (30, 20, 15) 2 0.0584 0.0103 0.0535 0.0074 0.0451 0.0019 0.0298
    (50, 20, 15) 2 0.0568 0.0091 0.0521 0.0070 0.0447 0.0029 0.0312
    (60, 30, 20) 2 0.0407 0.0070 0.0344 0.0052 0.0295 0.0017 0.0204
    (30, 20, 15) 3 0.0688 0.0088 0.0649 0.0060 0.0554 0.0006 0.0388
    (50, 20, 15) 3 0.0875 0.0105 0.0894 0.0085 0.0785 0.0046 0.0601
    (60, 30, 20) 3 0.0517 0.0067 0.0511 0.0050 0.0447 0.0016 0.0331
    2.5 (30, 20, 15) 1 0.0568 0.0113 0.0498 0.0084 0.0420 0.0027 0.0278
    (50, 20, 15) 1 0.0585 0.0115 0.0519 0.0093 0.0447 0.0051 0.0316
    (60, 30, 20) 1 0.0425 0.0079 0.0400 0.0060 0.0352 0.0025 0.0263
    (30, 20, 15) 2 0.0631 0.0108 0.0582 0.0078 0.0503 0.0022 0.0360
    (50, 20, 15) 2 0.0659 0.0056 0.0590 0.0035 0.0516 0.0006 0.0381
    (60, 30, 20) 2 0.0318 0.0039 0.0283 0.0021 0.0236 0.0014 0.0146
    (30, 20, 15) 3 0.0748 0.0091 0.0714 0.0063 0.0619 0.0009 0.0456
    (50, 20, 15) 3 0.0913 0.0096 0.0915 0.0076 0.0806 0.0037 0.0622
    (60, 30, 20) 3 0.0592 0.0103 0.0577 0.0085 0.0511 0.0051 0.0394

    Table 3.  MSE and EB of the ML and Bayesian estimates for R(t) at the different censoring schemes.
    Bayesian
$\hat{R}(t)_{BS}$ $\hat{R}(t)_{BL}$ $\hat{R}(t)_{BE}$
T (n,m,k) Sch. $\hat{R}(t)_{ML}$ IP NIP IP NIP IP NIP
    MSE
    0.5 (30, 20, 15) 1 4.0E-04 3.0E-06 1.2E-03 3.0E-06 1.1E-03 2.0E-06 2.0E-04
    (50, 20, 15) 1 7.0E-04 3.0E-06 2.0E-03 3.0E-06 1.9E-03 1.0E-06 3.0E-04
    (60, 30, 20) 1 5.0E-04 3.0E-06 1.2E-03 3.0E-06 1.1E-03 2.0E-06 3.0E-04
    (30, 20, 15) 2 5.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 2.0E-06 3.0E-04
    (50, 20, 15) 2 9.0E-04 3.0E-06 2.2E-03 3.0E-06 2.1E-03 2.0E-06 4.0E-04
    (60, 30, 20) 2 6.0E-04 3.0E-06 1.3E-03 3.0E-06 1.2E-03 2.0E-06 4.0E-04
    (30, 20, 15) 3 5.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 2.0E-06 3.0E-04
    (50, 20, 15) 3 5.0E-04 2.0E-06 1.3E-03 2.0E-06 1.2E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 3.0E-04 3.0E-06 6.0E-04 3.0E-06 6.0E-04 2.0E-06 2.0E-04
    1.5 (30, 20, 15) 1 4.0E-04 4.0E-06 9.0E-04 4.0E-06 8.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 1 5.0E-04 3.0E-06 1.1E-03 3.0E-06 1.1E-03 2.0E-06 3.0E-04
    (60, 30, 20) 1 2.0E-04 4.0E-06 5.0E-04 4.0E-06 5.0E-04 2.0E-06 2.0E-04
    (30, 20, 15) 2 4.0E-04 4.0E-06 8.0E-04 4.0E-06 8.0E-04 2.0E-06 3.0E-04
    (50, 20, 15) 2 4.0E-04 3.0E-06 9.0E-04 3.0E-06 9.0E-04 2.0E-06 2.0E-04
    (60, 30, 20) 2 2.0E-04 4.0E-06 4.0E-04 4.0E-06 4.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 3 5.0E-04 3.0E-06 1.3E-03 3.0E-06 1.2E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    2.5 (30, 20, 15) 1 2.0E-04 3.0E-06 6.0E-04 3.0E-06 6.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 1 4.0E-04 4.0E-06 9.0E-04 4.0E-06 9.0E-04 2.0E-06 3.0E-04
    (60, 30, 20) 1 2.0E-04 4.0E-06 4.0E-04 4.0E-06 4.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 2 3.0E-04 4.0E-06 6.0E-04 4.0E-06 6.0E-04 2.0E-06 2.0E-04
    (50, 20, 15) 2 3.0E-04 3.0E-06 7.0E-04 3.0E-06 7.0E-04 2.0E-06 2.0E-04
    (60, 30, 20) 2 2.0E-04 4.0E-06 3.0E-04 4.0E-06 3.0E-04 2.0E-06 1.0E-04
    (30, 20, 15) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    (50, 20, 15) 3 6.0E-04 3.0E-06 1.4E-03 3.0E-06 1.3E-03 1.0E-06 3.0E-04
    (60, 30, 20) 3 2.0E-04 3.0E-06 5.0E-04 3.0E-06 5.0E-04 2.0E-06 1.0E-04
    EB
    0.5 (30, 20, 15) 1 5.2E-03 1.1E-03 2.1E-02 1.1E-03 2.0E-02 5.0E-04 1.1E-03
    (50, 20, 15) 1 8.8E-03 1.1E-03 2.7E-02 1.1E-03 2.6E-02 5.0E-04 1.0E-05
    (60, 30, 20) 1 6.7E-03 1.0E-03 1.9E-02 1.0E-03 1.9E-02 5.0E-04 3.0E-04
    (30, 20, 15) 2 6.1E-03 1.1E-03 2.2E-02 1.1E-03 2.2E-02 5.0E-04 2.0E-04
    (50, 20, 15) 2 9.9E-03 1.1E-03 2.8E-02 1.1E-03 2.7E-02 5.0E-04 8.0E-04
    (60, 30, 20) 2 7.1E-03 1.1E-03 1.9E-02 1.1E-03 1.9E-02 5.0E-04 8.0E-04
    (30, 20, 15) 3 6.1E-03 1.1E-03 2.2E-02 1.1E-03 2.2E-02 5.0E-04 2.0E-04
    (50, 20, 15) 3 5.5E-03 1.0E-03 2.0E-02 1.0E-03 2.0E-02 6.0E-04 1.9E-03
    (60, 30, 20) 3 4.1E-03 1.0E-03 1.3E-02 1.0E-03 1.3E-02 5.0E-04 8.0E-04
    1.5 (30, 20, 15) 1 5.1E-03 1.1E-03 1.7E-02 1.0E-03 1.6E-02 5.0E-04 7.0E-04
    (50, 20, 15) 1 5.4E-03 1.0E-03 1.8E-02 1.0E-03 1.8E-02 5.0E-04 5.0E-04
    (60, 30, 20) 1 3.6E-03 1.1E-03 1.1E-02 1.0E-03 1.1E-02 4.0E-04 2.0E-04
    (30, 20, 15) 2 4.1E-03 9.0E-04 1.5E-02 9.0E-04 1.5E-02 6.0E-04 2.0E-05
    (50, 20, 15) 2 4.6E-03 1.0E-03 1.6E-02 1.0E-03 1.6E-02 5.0E-04 6.0E-04
    (60, 30, 20) 2 2.9E-03 1.0E-03 1.1E-02 1.0E-03 1.0E-02 5.0E-04 2.0E-04
    (30, 20, 15) 3 2.6E-03 1.0E-03 1.3E-02 9.0E-04 1.3E-02 6.0E-04 1.5E-03
    (50, 20, 15) 3 5.2E-03 1.0E-03 2.0E-02 1.0E-03 2.0E-02 6.0E-04 2.1E-03
    (60, 30, 20) 3 3.0E-03 1.0E-03 1.2E-02 1.0E-03 1.2E-02 5.0E-04 1.4E-03
    2.5 (30, 20, 15) 1 3.3E-03 9.0E-04 1.3E-02 9.0E-04 1.3E-02 6.0E-04 3.0E-04
    (50, 20, 15) 1 5.1E-03 1.1E-03 1.7E-02 1.1E-03 1.7E-02 5.0E-04 1.0E-04
    (60, 30, 20) 1 2.4E-03 9.0E-04 9.4E-03 9.0E-04 9.3E-03 5.0E-04 6.0E-04
    (30, 20, 15) 2 3.3E-03 1.0E-03 1.3E-02 9.0E-04 1.3E-02 6.0E-04 2.0E-04
    (50, 20, 15) 2 2.8E-03 9.0E-04 1.4E-02 9.0E-04 1.4E-02 6.0E-04 1.6E-03
    (60, 30, 20) 2 2.5E-03 1.0E-03 9.7E-03 1.0E-03 9.6E-03 5.0E-04 6.0E-04
    (30, 20, 15) 3 2.9E-03 9.0E-04 1.3E-02 9.0E-04 1.3E-02 6.0E-04 1.3E-03
    (50, 20, 15) 3 5.9E-03 1.1E-03 2.1E-02 1.0E-03 2.0E-02 5.0E-04 1.5E-03
    (60, 30, 20) 3 3.1E-03 1.0E-03 1.2E-02 1.0E-03 1.2E-02 5.0E-04 1.4E-03

    Table 4.  MSE and EB of the ML and Bayesian estimates for h(t) at different censoring schemes.
    Bayesian
$\hat{h}(t)_{BS}$ $\hat{h}(t)_{BL}$ $\hat{h}(t)_{BE}$
T (n,m,k) Sch. $\hat{h}(t)_{ML}$ IP NIP IP NIP IP NIP
    MSE
    0.5 (30, 20, 15) 1 2.3E-02 3.0E-05 3.0E-02 3.0E-05 2.5E-02 3.0E-05 1.6E-02
    (50, 20, 15) 1 2.0E-02 3.0E-05 2.7E-02 3.0E-05 2.3E-02 3.0E-05 1.5E-02
    (60, 30, 20) 1 2.2E-02 4.0E-05 2.9E-02 4.0E-05 2.1E-02 4.0E-05 1.8E-02
    (30, 20, 15) 2 2.4E-02 3.0E-05 3.0E-02 3.0E-05 2.6E-02 4.0E-05 1.7E-02
    (50, 20, 15) 2 2.0E-02 3.0E-05 2.6E-02 3.0E-05 2.3E-02 3.0E-05 1.5E-02
    (60, 30, 20) 2 1.1E-02 4.0E-05 1.3E-02 4.0E-05 1.2E-02 4.0E-05 8.7E-03
    (30, 20, 15) 3 2.4E-02 3.0E-05 3.0E-02 3.0E-05 2.6E-02 4.0E-05 1.7E-02
    (50, 20, 15) 3 6.8E-02 3.0E-05 1.7E-01 3.0E-05 8.6E-02 3.0E-05 5.2E-02
    (60, 30, 20) 3 1.4E-02 4.0E-05 1.8E-02 4.0E-05 1.6E-02 4.0E-05 1.1E-02
    1.5 (30, 20, 15) 1 9.8E-03 4.0E-05 1.0E-02 4.0E-05 9.7E-03 5.0E-05 7.3E-03
    (50, 20, 15) 1 2.6E-02 4.0E-05 3.1E-02 4.0E-05 2.6E-02 4.0E-05 1.7E-02
    (60, 30, 20) 1 7.7E-03 6.0E-05 8.4E-03 6.0E-05 8.0E-03 6.0E-05 6.2E-03
    (30, 20, 15) 2 1.2E-02 4.0E-05 1.4E-02 4.0E-05 1.3E-02 5.0E-05 9.1E-03
    (50, 20, 15) 2 1.8E-02 4.0E-05 2.2E-02 4.0E-05 1.9E-02 4.0E-05 1.2E-02
    (60, 30, 20) 2 7.8E-03 5.0E-05 8.5E-03 5.0E-05 8.1E-03 6.0E-05 6.2E-03
    (30, 20, 15) 3 2.4E-02 4.0E-05 3.3E-02 4.0E-05 2.6E-02 4.0E-05 1.8E-02
    (50, 20, 15) 3 6.0E-02 3.0E-05 1.1E-01 3.0E-05 7.0E-02 3.0E-05 4.4E-02
    (60, 30, 20) 3 1.4E-02 4.0E-05 1.8E-02 4.0E-05 1.6E-02 4.0E-05 1.1E-02
    2.5 (30, 20, 15) 1 1.1E-02 5.0E-05 1.2E-02 5.0E-05 1.1E-02 5.0E-05 8.2E-03
    (50, 20, 15) 1 2.2E-02 4.0E-05 2.7E-02 4.0E-05 2.3E-02 4.0E-05 1.6E-02
    (60, 30, 20) 1 9.1E-03 5.0E-05 1.0E-02 6.0E-05 9.5E-03 6.0E-05 7.3E-03
    (30, 20, 15) 2 1.2E-02 5.0E-05 1.3E-02 5.0E-05 1.3E-02 5.0E-05 9.0E-03
    (50, 20, 15) 2 2.4E-02 3.0E-05 3.2E-02 3.0E-05 2.7E-02 4.0E-05 1.8E-02
    (60, 30, 20) 2 6.9E-03 5.0E-05 7.9E-03 5.0E-05 7.4E-03 5.0E-05 5.7E-03
    (30, 20, 15) 3 2.3E-02 4.0E-05 3.3E-02 4.0E-05 2.6E-02 4.0E-05 1.8E-02
    (50, 20, 15) 3 7.9E-02 3.0E-05 1.5E-01 3.0E-05 7.9E-02 3.0E-05 5.4E-02
    (60, 30, 20) 3 1.6E-02 4.0E-05 2.0E-02 4.0E-05 1.8E-02 4.0E-05 1.2E-02
    EB
    0.5 (30, 20, 15) 1 5.5E-02 1.2E-04 6.8E-02 2.3E-04 6.2E-02 1.9E-03 2.5E-02
    (50, 20, 15) 1 4.7E-02 2.6E-04 6.2E-02 3.7E-04 5.6E-02 2.0E-03 1.8E-02
    (60, 30, 20) 1 3.2E-02 2.0E-05 4.2E-02 8.0E-05 3.8E-02 1.7E-03 1.4E-02
    (30, 20, 15) 2 5.8E-02 2.4E-04 7.1E-02 3.5E-04 6.5E-02 2.0E-03 2.7E-02
    (50, 20, 15) 2 4.6E-02 2.5E-04 6.0E-02 3.6E-04 5.4E-02 2.0E-03 1.7E-02
    (60, 30, 20) 2 3.0E-02 1.0E-05 3.7E-02 1.0E-04 3.5E-02 1.7E-03 1.1E-02
    (30, 20, 15) 3 5.8E-02 2.4E-04 7.1E-02 3.5E-04 6.5E-02 2.0E-03 2.7E-02
    (50, 20, 15) 3 1.0E-01 2.4E-04 1.5E-01 1.3E-04 1.2E-01 1.5E-03 6.6E-02
    (60, 30, 20) 3 4.2E-02 2.8E-04 5.4E-02 1.7E-04 5.0E-02 1.4E-03 2.6E-02
    1.5 (30, 20, 15) 1 3.0E-02 2.7E-04 3.1E-02 3.7E-04 2.9E-02 2.0E-03 8.8E-03
    (50, 20, 15) 1 5.6E-02 7.0E-05 6.2E-02 4.0E-05 5.7E-02 1.7E-03 2.8E-02
    (60, 30, 20) 1 2.6E-02 8.0E-06 2.9E-02 1.1E-04 2.7E-02 1.7E-03 1.2E-02
    (30, 20, 15) 2 4.0E-02 2.7E-04 4.2E-02 1.6E-04 4.0E-02 1.5E-03 1.8E-02
    (50, 20, 15) 2 5.0E-02 3.0E-05 6.0E-02 8.0E-05 5.5E-02 1.7E-03 2.6E-02
    (60, 30, 20) 2 3.0E-02 3.5E-04 3.2E-02 2.5E-04 3.0E-02 1.3E-03 1.5E-02
    (30, 20, 15) 3 5.1E-02 9.0E-05 5.9E-02 8.0E-06 5.4E-02 1.7E-03 2.8E-02
    (50, 20, 15) 3 8.8E-02 2.1E-04 1.3E-01 1.0E-04 1.1E-01 1.6E-03 5.7E-02
    (60, 30, 20) 3 4.2E-02 2.6E-04 5.2E-02 1.5E-04 4.9E-02 1.5E-03 2.6E-02
    2.5 (30, 20, 15) 1 3.6E-02 1.7E-04 3.7E-02 7.0E-05 3.5E-02 1.5E-03 1.6E-02
    (50, 20, 15) 1 5.0E-02 1.1E-04 5.7E-02 2.2E-04 5.3E-02 1.8E-03 2.5E-02
    (60, 30, 20) 1 3.1E-02 4.2E-04 3.4E-02 3.2E-04 3.2E-02 1.2E-03 1.8E-02
    (30, 20, 15) 2 4.2E-02 1.8E-04 4.4E-02 8.0E-05 4.1E-02 1.5E-03 2.1E-02
    (50, 20, 15) 2 6.1E-02 3.0E-04 7.1E-02 2.0E-04 6.5E-02 1.4E-03 3.4E-02
    (60, 30, 20) 2 2.6E-02 9.0E-05 2.8E-02 8.0E-06 2.7E-02 1.6E-03 1.2E-02
    (30, 20, 15) 3 5.4E-02 8.0E-05 6.4E-02 3.0E-05 5.8E-02 1.7E-03 3.2E-02
    (50, 20, 15) 3 9.7E-02 6.0E-05 1.4E-01 5.0E-05 1.2E-01 1.7E-03 6.3E-02
    (60, 30, 20) 3 4.5E-02 4.1E-04 5.6E-02 3.0E-04 5.2E-02 1.3E-03 2.8E-02

Table 5.  The AL of the 90% and 95% confidence intervals and the corresponding CP for $\hat\alpha_{ML}$ and $\hat\alpha_{B}$ based on the different censoring schemes.
$\hat\alpha_{B}$
$\hat\alpha_{ML}$ IP NIP
90% 95% 90% 95% 90% 95%
(n,m,k) Sch. AL CP AL CP AL CP AL CP AL CP AL CP
    T=0.5
    (30, 20, 15) 1 2.485 0.937 2.874 0.969 0.829 0.959 0.980 0.985 2.439 0.881 2.853 0.947
    (50, 20, 15) 1 2.610 0.949 3.008 0.964 0.753 0.966 0.896 0.990 2.554 0.897 3.015 0.944
    (60, 30, 20) 1 1.941 0.925 2.313 0.960 0.699 0.967 0.836 0.989 1.908 0.885 2.283 0.941
    (30, 20, 15) 2 2.479 0.927 2.892 0.959 0.826 0.949 0.981 0.985 2.412 0.876 2.857 0.931
    (50, 20, 15) 2 2.652 0.939 2.964 0.956 0.749 0.974 0.884 0.985 2.621 0.885 2.948 0.942
    (60, 30, 20) 2 1.907 0.917 2.272 0.949 0.697 0.965 0.825 0.988 1.861 0.883 2.233 0.925
    (30, 20, 15) 3 2.204 0.934 2.892 0.959 0.806 0.946 0.981 0.985 2.166 0.897 2.857 0.931
    (50, 20, 15) 3 3.342 0.947 3.935 0.970 0.725 0.967 0.859 0.994 3.575 0.874 4.316 0.931
    (60, 30, 20) 3 1.894 0.932 2.228 0.964 0.675 0.970 0.795 0.984 1.870 0.871 2.235 0.930
    T=1.5
    (30, 20, 15) 1 1.850 0.891 2.150 0.962 0.823 0.953 0.974 0.985 1.793 0.868 2.071 0.946
    (50, 20, 15) 1 2.235 0.943 2.686 0.954 0.749 0.964 0.892 0.984 2.173 0.895 2.622 0.910
    (60, 30, 20) 1 1.548 0.896 1.820 0.965 0.692 0.950 0.819 0.989 1.506 0.873 1.773 0.937
    (30, 20, 15) 2 1.831 0.904 2.204 0.961 0.821 0.963 0.973 0.981 1.769 0.875 2.144 0.936
    (50, 20, 15) 2 2.587 0.923 2.676 0.962 0.748 0.963 0.883 0.984 2.551 0.865 2.617 0.932
    (60, 30, 20) 2 1.519 0.919 1.846 0.960 0.684 0.966 0.816 0.990 1.483 0.893 1.791 0.932
    (30, 20, 15) 3 1.968 0.938 2.380 0.968 0.801 0.959 0.950 0.986 1.919 0.884 2.356 0.939
    (50, 20, 15) 3 3.419 0.943 3.753 0.971 0.728 0.961 0.859 0.988 3.635 0.861 4.000 0.935
    (60, 30, 20) 3 1.856 0.932 2.214 0.968 0.668 0.968 0.794 0.987 1.854 0.877 2.207 0.936
    T=2.5
    (30, 20, 15) 1 1.775 0.919 2.115 0.961 0.823 0.964 0.975 0.985 1.722 0.889 2.042 0.946
    (50, 20, 15) 1 2.185 0.950 2.584 0.964 0.752 0.965 0.885 0.985 2.134 0.882 2.519 0.926
    (60, 30, 20) 1 1.510 0.926 1.826 0.959 0.693 0.956 0.820 0.985 1.467 0.907 1.775 0.932
    (30, 20, 15) 2 1.852 0.897 2.158 0.954 0.817 0.953 0.977 0.979 1.819 0.875 2.104 0.925
    (50, 20, 15) 2 2.333 0.957 2.780 0.970 0.745 0.967 0.887 0.986 2.263 0.881 2.736 0.941
    (60, 30, 20) 2 1.504 0.918 1.803 0.965 0.684 0.964 0.816 0.985 1.461 0.899 1.759 0.932
    (30, 20, 15) 3 1.930 0.943 2.400 0.970 0.805 0.944 0.952 0.990 1.890 0.904 2.391 0.926
    (50, 20, 15) 3 3.492 0.946 3.870 0.962 0.725 0.967 0.856 0.987 3.638 0.854 4.114 0.914
    (60, 30, 20) 3 1.896 0.932 2.240 0.974 0.668 0.956 0.792 0.977 1.877 0.861 2.240 0.936

Table 6.  The AL of the 90% and 95% confidence intervals and the corresponding CP for $\hat\beta_{ML}$ and $\hat\beta_{B}$ based on the different censoring schemes.
Bayesian
$\hat\beta_{ML}$ IP NIP
90% 95% 90% 95% 90% 95%
(n,m,k) Sch. AL CP AL CP AL CP AL CP AL CP AL CP
    T=0.5
    (30, 20, 15) 1 0.718 0.883 0.847 0.964 0.351 0.936 0.416 0.970 0.700 0.874 0.812 0.942
    (50, 20, 15) 1 0.641 0.886 0.753 0.949 0.303 0.922 0.358 0.972 0.619 0.859 0.721 0.932
    (60, 30, 20) 1 0.546 0.897 0.644 0.944 0.281 0.933 0.329 0.968 0.535 0.873 0.619 0.913
    (30, 20, 15) 2 0.717 0.890 0.851 0.954 0.350 0.936 0.413 0.971 0.692 0.875 0.816 0.917
    (50, 20, 15) 2 0.631 0.887 0.747 0.952 0.298 0.936 0.355 0.964 0.613 0.865 0.715 0.930
    (60, 30, 20) 2 0.539 0.900 0.642 0.956 0.279 0.943 0.328 0.963 0.524 0.887 0.614 0.930
    (30, 20, 15) 3 0.704 0.908 0.851 0.954 0.344 0.923 0.413 0.971 0.685 0.874 0.816 0.917
    (50, 20, 15) 3 0.706 0.888 0.831 0.951 0.290 0.924 0.342 0.968 0.680 0.845 0.793 0.919
    (60, 30, 20) 3 0.533 0.907 0.640 0.953 0.270 0.941 0.321 0.971 0.519 0.882 0.616 0.916
    T=1.5
    (30, 20, 15) 1 0.605 0.899 0.712 0.943 0.352 0.944 0.414 0.965 0.588 0.880 0.683 0.924
    (50, 20, 15) 1 0.573 0.898 0.679 0.941 0.303 0.938 0.360 0.971 0.559 0.878 0.652 0.913
    (60, 30, 20) 1 0.463 0.877 0.551 0.926 0.279 0.921 0.330 0.967 0.452 0.865 0.535 0.909
    (30, 20, 15) 2 0.599 0.894 0.715 0.953 0.347 0.939 0.412 0.972 0.586 0.871 0.695 0.933
    (50, 20, 15) 2 0.631 0.900 0.680 0.951 0.299 0.938 0.353 0.966 0.612 0.881 0.654 0.937
    (60, 30, 20) 2 0.463 0.889 0.551 0.950 0.279 0.941 0.326 0.964 0.454 0.862 0.531 0.928
    (30, 20, 15) 3 0.643 0.904 0.762 0.961 0.346 0.936 0.404 0.977 0.625 0.884 0.736 0.945
    (50, 20, 15) 3 0.695 0.907 0.825 0.955 0.287 0.931 0.343 0.968 0.672 0.859 0.784 0.917
    (60, 30, 20) 3 0.534 0.901 0.633 0.949 0.271 0.936 0.320 0.973 0.520 0.873 0.607 0.928
    T=2.5
    (30, 20, 15) 1 0.586 0.900 0.697 0.940 0.352 0.933 0.415 0.964 0.575 0.883 0.672 0.920
    (50, 20, 15) 1 0.558 0.910 0.668 0.947 0.301 0.935 0.357 0.951 0.540 0.875 0.643 0.912
    (60, 30, 20) 1 0.454 0.899 0.545 0.930 0.278 0.935 0.330 0.968 0.444 0.881 0.526 0.909
    (30, 20, 15) 2 0.594 0.891 0.698 0.947 0.349 0.933 0.414 0.965 0.582 0.857 0.676 0.915
    (50, 20, 15) 2 0.576 0.889 0.683 0.952 0.301 0.941 0.353 0.961 0.559 0.857 0.656 0.935
    (60, 30, 20) 2 0.460 0.873 0.539 0.952 0.279 0.921 0.325 0.969 0.446 0.855 0.525 0.934
    (30, 20, 15) 3 0.640 0.903 0.766 0.952 0.348 0.924 0.405 0.973 0.624 0.878 0.735 0.928
    (50, 20, 15) 3 0.703 0.896 0.828 0.944 0.289 0.937 0.343 0.960 0.672 0.861 0.782 0.905
    (60, 30, 20) 3 0.535 0.910 0.638 0.951 0.270 0.930 0.322 0.968 0.521 0.886 0.616 0.922

Table 7.  The AL of the 90% and 95% confidence intervals and the corresponding CP for $\hat{R}(t)_{ML}$ and $\hat{R}(t)_{B}$ based on the different censoring schemes.
$\hat{R}(t)_{B}$
$\hat{R}(t)_{ML}$ IP NIP
90% 95% 90% 95% 90% 95%
(n,m,k) Sch. AL CP AL CP AL CP AL CP AL CP AL CP
    T=0.5
    (30, 20, 15) 1 0.059 0.669 0.073 0.712 0.014 1.000 0.017 1.000 0.092 0.887 0.123 0.940
    (50, 20, 15) 1 0.075 0.701 0.094 0.726 0.014 1.000 0.017 1.000 0.109 0.884 0.144 0.939
    (60, 30, 20) 1 0.061 0.730 0.073 0.758 0.014 1.000 0.017 1.000 0.082 0.872 0.105 0.936
    (30, 20, 15) 2 0.059 0.675 0.075 0.687 0.014 1.000 0.018 1.000 0.093 0.886 0.125 0.934
    (50, 20, 15) 2 0.074 0.685 0.095 0.716 0.014 1.000 0.018 1.000 0.106 0.869 0.143 0.945
    (60, 30, 20) 2 0.061 0.740 0.074 0.767 0.014 1.000 0.017 1.000 0.084 0.886 0.104 0.925
    (30, 20, 15) 3 0.067 0.711 0.075 0.687 0.014 0.999 0.018 1.000 0.096 0.884 0.125 0.934
    (50, 20, 15) 3 0.060 0.607 0.076 0.652 0.014 1.000 0.017 1.000 0.088 0.854 0.120 0.917
    (60, 30, 20) 3 0.047 0.707 0.057 0.721 0.014 1.000 0.017 1.000 0.062 0.866 0.080 0.927
    T=1.5
    (30, 20, 15) 1 0.049 0.717 0.063 0.764 0.014 1.000 0.017 1.000 0.071 0.868 0.095 0.937
    (50, 20, 15) 1 0.049 0.661 0.065 0.713 0.014 1.000 0.017 1.000 0.075 0.890 0.103 0.910
    (60, 30, 20) 1 0.040 0.738 0.050 0.781 0.014 1.000 0.017 1.000 0.053 0.879 0.070 0.920
    (30, 20, 15) 2 0.049 0.722 0.057 0.716 0.014 1.000 0.017 1.000 0.070 0.875 0.089 0.935
    (50, 20, 15) 2 0.075 0.664 0.063 0.716 0.014 1.000 0.017 1.000 0.107 0.877 0.099 0.931
    (60, 30, 20) 2 0.040 0.746 0.047 0.746 0.014 1.000 0.017 1.000 0.053 0.878 0.067 0.921
    (30, 20, 15) 3 0.046 0.696 0.053 0.728 0.014 1.000 0.017 1.000 0.066 0.873 0.085 0.942
    (50, 20, 15) 3 0.063 0.622 0.075 0.677 0.014 1.000 0.017 1.000 0.092 0.857 0.122 0.924
    (60, 30, 20) 3 0.045 0.730 0.053 0.741 0.014 1.000 0.017 1.000 0.059 0.871 0.076 0.928
    T=2.5
    (30, 20, 15) 1 0.045 0.718 0.053 0.748 0.014 1.000 0.017 1.000 0.064 0.883 0.083 0.930
    (50, 20, 15) 1 0.051 0.689 0.064 0.722 0.014 1.000 0.017 1.000 0.075 0.874 0.101 0.925
    (60, 30, 20) 1 0.038 0.757 0.045 0.764 0.014 1.000 0.016 1.000 0.051 0.912 0.064 0.920
    (30, 20, 15) 2 0.042 0.695 0.052 0.708 0.014 1.000 0.017 1.000 0.060 0.870 0.080 0.919
    (50, 20, 15) 2 0.049 0.668 0.056 0.697 0.014 0.999 0.017 1.000 0.074 0.861 0.094 0.941
    (60, 30, 20) 2 0.037 0.752 0.045 0.789 0.014 1.000 0.016 1.000 0.050 0.878 0.065 0.925
    (30, 20, 15) 3 0.044 0.716 0.053 0.712 0.014 1.000 0.017 1.000 0.065 0.911 0.084 0.932
    (50, 20, 15) 3 0.060 0.622 0.076 0.667 0.014 1.000 0.017 1.000 0.089 0.849 0.121 0.918
    (60, 30, 20) 3 0.044 0.715 0.053 0.717 0.014 1.000 0.017 1.000 0.059 0.864 0.076 0.930

Table 8.  The AL of the 90% and 95% confidence intervals and the corresponding CP for the ML estimate ^h(t)ML and the Bayesian estimate ^h(t)B based on the different censoring schemes.
    Columns: (n, m, k) and censoring scheme (Sch.), followed by the AL and CP of the 90% and 95% intervals for ^h(t)ML, and then for ^h(t)B under the informative prior (IP) and the non-informative prior (NIP).
    (n,m,k) Sch. AL CP AL CP AL CP AL CP AL CP AL CP
    T=0.5
    (30, 20, 15) 1 0.410 0.669 0.467 0.712 0.068 1.000 0.081 1.000 0.413 0.890 0.484 0.944
    (50, 20, 15) 1 0.412 0.701 0.464 0.726 0.068 1.000 0.080 1.000 0.412 0.880 0.486 0.937
    (60, 30, 20) 1 0.299 0.730 0.355 0.758 0.067 1.000 0.080 1.000 0.300 0.870 0.362 0.940
    (30, 20, 15) 2 0.409 0.675 0.475 0.687 0.068 1.000 0.081 1.000 0.405 0.883 0.489 0.930
    (50, 20, 15) 2 0.422 0.685 0.458 0.716 0.068 1.000 0.080 1.000 0.429 0.872 0.475 0.936
    (60, 30, 20) 2 0.290 0.740 0.346 0.767 0.067 1.000 0.080 1.000 0.286 0.884 0.349 0.926
    (30, 20, 15) 3 0.362 0.711 0.475 0.687 0.068 1.000 0.081 1.000 0.365 0.882 0.489 0.930
    (50, 20, 15) 3 0.577 0.607 0.678 0.652 0.068 1.000 0.081 1.000 0.670 0.855 0.821 0.915
    (60, 30, 20) 3 0.303 0.707 0.358 0.721 0.067 1.000 0.079 1.000 0.304 0.865 0.372 0.929
    T=1.5
    (30, 20, 15) 1 0.279 0.717 0.318 0.764 0.067 1.000 0.079 1.000 0.270 0.872 0.306 0.934
    (50, 20, 15) 1 0.344 0.661 0.417 0.713 0.067 1.000 0.080 1.000 0.339 0.892 0.417 0.907
    (60, 30, 20) 1 0.228 0.738 0.269 0.781 0.066 1.000 0.078 1.000 0.222 0.874 0.264 0.919
    (30, 20, 15) 2 0.275 0.722 0.333 0.716 0.067 0.999 0.080 1.000 0.265 0.873 0.327 0.936
    (50, 20, 15) 2 0.406 0.664 0.411 0.716 0.068 1.000 0.080 1.000 0.412 0.882 0.414 0.930
    (60, 30, 20) 2 0.226 0.746 0.275 0.746 0.066 1.000 0.078 1.000 0.221 0.874 0.267 0.922
    (30, 20, 15) 3 0.321 0.696 0.388 0.728 0.067 1.000 0.080 1.000 0.317 0.877 0.395 0.943
    (50, 20, 15) 3 0.585 0.622 0.638 0.677 0.068 1.000 0.081 1.000 0.673 0.860 0.731 0.927
    (60, 30, 20) 3 0.300 0.730 0.355 0.741 0.067 1.000 0.079 1.000 0.307 0.871 0.366 0.933
    T=2.5
    (30, 20, 15) 1 0.264 0.718 0.314 0.748 0.067 1.000 0.079 1.000 0.255 0.878 0.303 0.920
    (50, 20, 15) 1 0.332 0.689 0.397 0.722 0.067 1.000 0.079 1.000 0.330 0.877 0.395 0.924
    (60, 30, 20) 1 0.222 0.757 0.271 0.764 0.066 1.000 0.078 1.000 0.215 0.905 0.266 0.913
    (30, 20, 15) 2 0.288 0.695 0.325 0.708 0.067 1.000 0.079 1.000 0.287 0.873 0.318 0.917
    (50, 20, 15) 2 0.369 0.668 0.434 0.697 0.068 1.000 0.080 1.000 0.363 0.874 0.439 0.942
    (60, 30, 20) 2 0.225 0.752 0.264 0.789 0.065 1.000 0.078 1.000 0.216 0.873 0.260 0.923
    (30, 20, 15) 3 0.309 0.716 0.394 0.712 0.067 1.000 0.080 1.000 0.306 0.908 0.408 0.934
    (50, 20, 15) 3 0.603 0.622 0.668 0.667 0.068 1.000 0.081 1.000 0.669 0.852 0.775 0.918
    (60, 30, 20) 3 0.306 0.715 0.362 0.717 0.067 1.000 0.079 1.000 0.309 0.871 0.375 0.931


From Tables 1-4, the computational results show that in most cases the Bayesian estimation based on the SE, Linex and GE loss functions is more precise than the ML estimation. Also, when n and m increase, the mean squared error decreases. An exception occurs when (n,m,k)=(50,20,15): this case does not follow the pattern because the effective number of failures changes according to which termination case occurs, so the effective sample size may be 15, 20, or some D with 15<D<20. Moreover, comparing the results for the informative priors with the corresponding ones for the non-informative priors shows that the former are more accurate, as we would expect. Finally, from the average lengths and coverage probabilities presented in Tables 5-8, we see that the estimates behave well in terms of coverage probability, and the Bayesian intervals outperform the ML intervals in terms of average width.
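For readers who want to reproduce this kind of comparison, the following minimal Python sketch shows how Bayesian point estimates under the squared error, Linex and general entropy loss functions can be approximated from posterior draws of a positive parameter. The draws, the loss constants c and q, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bayes_estimates(theta_draws, c=0.5, q=0.5):
    """Approximate Bayes estimates of a positive parameter from posterior draws.

    SE loss    -> posterior mean
    Linex loss -> -(1/c) * log E[exp(-c * theta)]
    GE loss    -> (E[theta**(-q)])**(-1/q)
    """
    theta = np.asarray(theta_draws, dtype=float)
    se = theta.mean()
    linex = -np.log(np.mean(np.exp(-c * theta))) / c
    ge = np.mean(theta ** (-q)) ** (-1.0 / q)
    return {"SE": se, "Linex": linex, "GE": ge}

# Illustrative use with fake posterior draws (not the paper's output):
rng = np.random.default_rng(0)
draws = rng.gamma(shape=4.0, scale=0.3, size=5000)
print(bayes_estimates(draws))
```

Under squared error loss the estimate is the posterior mean; the Linex and general entropy expressions follow from minimizing the corresponding posterior expected losses.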

In order to show the performance of the inferential results established for the Burr Type-Ⅻ distribution based on generalized progressive hybrid censoring, we consider here the real data set used in Example 1, which is reported in Wingo [20]. Wingo assumed that the Burr Type-Ⅻ distribution fits these lifetime data. The test was performed on 30 units but was terminated after the failure of 20 units. We use these data with the following progressive censoring scheme: suppose n=20, m=18 and Ri=0 for i=1,...,16, R17=R18=1; then we would have the following progressively censored data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, and 2.9. We take k=16 and, for different values of T, we obtain three different generalized PHCS (a sketch of the corresponding termination rule is given after the list below), namely:

1. Scheme 1: Suppose T=2; since T<X16:18:20<X18:18:20, the experiment would have terminated at X16:18:20, with Ri=0 for i=1,...,15, R16=4, Rτ=0, and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, and 2.05.

2. Scheme 2: Suppose T=2.7; since X16:18:20<T<X18:18:20, the experiment would have terminated at T=2.7, with Ri=0 for i=1,...,16, R17=1, Rτ=2, and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, and 2.6.

3. Scheme 3: Suppose T=3.5; since X16:18:20<X18:18:20<T, the experiment would have terminated at X18:18:20, with the originally planned removals (Ri=0 for i=1,...,16, R17=R18=1) unchanged and Rτ=0, and we would have the following data: 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, and 2.9.
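To make the three termination cases above concrete, here is a small Python sketch of the generalized progressive hybrid stopping rule, i.e. stopping at max(X_{k:m:n}, min(X_{m:m:n}, T)), applied to an already progressively Type-Ⅱ censored sequence of failure times. The function name and its simplified bookkeeping of the removal counts are illustrative assumptions, not the paper's code.

```python
def gphc_terminate(x, T, k):
    """Return the stopping time and observed part of a generalized
    progressive hybrid censored sample.

    x : failure times of the underlying progressively Type-II censored
        sample X_{1:m:n}, ..., X_{m:m:n} (as listed in the text)
    T : time threshold
    k : minimum guaranteed number of failures (k <= m = len(x))
    """
    m = len(x)
    if T < x[k - 1]:                 # Case 1: stop at X_{k:m:n}
        stop, observed = x[k - 1], x[:k]
    elif T < x[m - 1]:               # Case 2: X_{k:m:n} <= T < X_{m:m:n}, stop at T
        d = sum(1 for t in x if t <= T)
        stop, observed = T, x[:d]
    else:                            # Case 3: stop at X_{m:m:n}
        stop, observed = x[m - 1], x
    return stop, observed

# Wingo data arranged as reported in the text:
x = [0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
     0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6, 2.9]
for T in (2.0, 2.7, 3.5):
    stop, obs = gphc_terminate(x, T=T, k=16)
    print(T, stop, len(obs))
```

Running it with T = 2, 2.7 and 3.5 reproduces the three stopping points described above (X16:18:20, T and X18:18:20, with 16, 17 and 18 observed failures, respectively).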

The ML and Bayesian estimates of the parameters, the survival function and the hazard function based on the generalized PHCS are obtained and presented in Table 9. The 90% and 95% asymptotic confidence intervals and credible intervals are constructed and presented in Table 10. Also, the point predictors and the 95% equi-tailed and HPD prediction intervals are computed for Ys:5:10, s=1,...,5, from a future progressively censored sample of size 5 drawn from N=10 units with progressive censoring scheme S=(0,2,1,2,0), based on the generated generalized PHCS and two different choices of the hyperparameters as given in Section 7; these results are presented in Table 11.
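Because Table 11 reports both equi-tailed (ET) and highest posterior density (HPD) prediction intervals, the following minimal sketch shows how the two kinds of 95% intervals are typically extracted from posterior-predictive draws. The placeholder draws and the function names are assumptions for illustration; they do not reproduce the paper's prediction algorithm.

```python
import numpy as np

def et_interval(samples, level=0.95):
    """Equi-tailed interval: symmetric tail probabilities."""
    a = (1.0 - level) / 2.0
    return tuple(np.quantile(samples, [a, 1.0 - a]))

def hpd_interval(samples, level=0.95):
    """Shortest interval containing `level` of the sorted draws."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    k = int(np.floor(level * n))
    widths = s[k:] - s[: n - k]
    i = int(np.argmin(widths))
    return s[i], s[i + k]

# Placeholder posterior-predictive draws (not the paper's output):
rng = np.random.default_rng(1)
y_draws = rng.lognormal(mean=-1.0, sigma=0.8, size=10000)
print(et_interval(y_draws), hpd_interval(y_draws))
```

The HPD interval is the shortest window containing the required posterior mass, which is why it is usually narrower than the ET interval for skewed predictive distributions, consistent with the comparison discussed after Table 11.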

    Table 9.  The ML and Bayesian estimates of ˆα, ˆβ, ^R(t), and ^h(t) at selected censoring schemes from the real data set.
Columns: estimator, censoring scheme, the ML estimate, and the Bayesian estimates under the squared error (BS), Linex (BL) and general entropy (BE) loss functions, each with the informative prior (IP) and the non-informative prior (NIP).
    estimator Scheme ML BS(IP) BS(NIP) BL(IP) BL(NIP) BE(IP) BE(NIP)
    ˆα 1 1.082 1.393 1.074 1.378 1.055 1.361 1.020
    2 1.071 1.397 1.073 1.382 1.056 1.365 1.024
    3 1.110 1.406 1.102 1.391 1.084 1.374 1.052
    ˆβ 1 1.436 0.979 1.435 0.975 1.411 0.967 1.384
    2 1.422 0.981 1.412 0.976 1.392 0.966 1.367
    3 1.469 0.999 1.489 0.994 1.466 0.983 1.441
    ^R(t=1) 1 0.472 0.386 0.484 0.385 0.482 0.378 0.470
    2 0.476 0.385 0.483 0.384 0.481 0.377 0.471
    3 0.463 0.383 0.474 0.381 0.472 0.374 0.462
    ^h(t=1) 1 0.777 0.675 0.761 0.671 0.749 0.661 0.710
    2 0.762 0.678 0.748 0.674 0.737 0.663 0.704
    3 0.815 0.694 0.807 0.691 0.796 0.679 0.764

Table 10.  The confidence and credible intervals and their AL for ˆα, ˆβ, ^R(t), and ^h(t) at selected censoring schemes from the real data set.
    Columns: estimator, censoring scheme, the 90% and 95% asymptotic confidence intervals (Asymp. CI), and the 90% and 95% Bayesian credible intervals under the informative prior (IP) and the non-informative prior (NIP); the AL of each interval is reported on the line below it.
    estimator Scheme Asymp.CI(90%) Asymp.CI(95%) IP(90%) IP(95%) NIP(90%) NIP(95%)
    ˆα 1 (0.626, 1.538) (0.538, 1.626) (0.675, 1.579) (0.948, 1.931) (0.678, 1.584) (0.610, 1.691)
    0.913 1.087 0.904 0.983 0.906 1.081
    2 (0.626, 1.515) (0.541, 1.601) (1.011, 1.840) (0.963, 1.921) (0.678, 1.561) (0.624, 1.652)
    0.889 1.060 0.830 0.959 0.883 1.028
    3 (0.659, 1.560) (0.573, 1.646) (1.030, 1.839) (0.971, 1.935) (0.705, 1.612) (0.638, 1.694)
    0.901 1.073 0.809 0.964 0.907 1.056
    ˆβ 1 (0.932, 1.939) (0.836, 2.036) (0.978, 1.965) (0.727, 1.275) (0.958, 1.888) (0.880, 2.053)
    1.007 1.200 0.987 0.548 0.931 1.173
    2 (0.933, 1.911) (0.840, 2.005) (0.776, 1.215) (0.733, 1.263) (0.961, 1.924) (0.892, 2.020)
    0.978 1.165 0.440 0.530 0.963 1.127
    3 (0.978, 1.960) (0.884, 2.054) (0.784, 1.220) (0.743, 1.297) (1.029, 1.962) (0.953, 2.172)
    0.982 1.170 0.436 0.555 0.933 1.219
^R(t) 1 (0.323, 0.622) (0.294, 0.650) (0.334, 0.626) (0.262, 0.518) (0.333, 0.625) (0.309, 0.655)
    0.299 0.356 0.292 0.256 0.291 0.345
    2 (0.329, 0.623) (0.301, 0.651) (0.279, 0.496) (0.264, 0.512) (0.338, 0.625) (0.317, 0.648)
    0.293 0.350 0.217 0.248 0.287 0.332
    3 (0.319, 0.608) (0.291, 0.636) (0.279, 0.490) (0.261, 0.510) (0.327, 0.613) (0.309, 0.642)
    0.289 0.345 0.210 0.249 0.286 0.334
^h(t) 1 (0.400, 1.153) (0.328, 1.225) (0.440, 1.186) (0.473, 0.922) (0.432, 1.152) (0.393, 1.248)
    0.753 0.898 0.745 0.449 0.719 0.855
    2 (0.411, 1.112) (0.344, 1.179) (0.500, 0.874) (0.474, 0.926) (0.442, 1.139) (0.396, 1.220)
    0.701 0.835 0.375 0.451 0.697 0.824
    3 (0.454, 1.176) (0.385, 1.245) (0.516, 0.890) (0.480, 0.939) (0.486, 1.191) (0.446, 1.281)
    0.722 0.860 0.374 0.459 0.705 0.835

Table 11.  Bayesian point predictors and 95% equi-tailed (ET) and HPD prediction intervals for Ys:5:10, s=1,...,5.
    Columns: case (censoring scheme), s, then under the informative prior (IP) the point predictor ˆYs:N, the ET interval and the HPD interval, followed by the same quantities under the non-informative prior (NIP); the length of each interval is reported on the line below it.
    Case-1 1 0.077 (0.001, 0.328) (0.055, 0.228) 0.194 (0.008, 0.619) (0.001, 0.142)
    0.327 0.173 0.611 0.142
    2 0.169 (0.012, 0.571) (0.000, 0.462) 0.353 (0.048, 0.928) (0.009, 0.793)
    0.559 0.462 0.88 0.784
    3 0.330 (0.039, 1.032) (0.006, 0.838) 0.580 (0.118, 1.480) (0.047, 1.252)
    0.993 0.832 1.362 1.205
    4 0.633 (0.093, 2.030) (0.023, 1.607) 0.955 (0.228, 2.642) (0.101, 2.129)
    1.937 1.584 2.415 2.028
    5 3.908 (0.270, 28.755) (0.028, 15.792) 4.166 (0.502, 30.960) (0.108, 16.274)
    28.485 15.764 30.457 16.166
    Case-2 1 0.078 (0.001, 0.331) (0.058, 0.309) 0.194 (0.008, 0.621) (0.001, 0.124)
    0.33 0.251 0.613 0.124
    2 0.171 (0.012, 0.574) (0.000, 0.466) 0.353 (0.046, 0.935) (0.008, 0.798)
    0.562 0.466 0.889 0.79
    3 0.333 (0.040, 1.035) (0.006, 0.842) 0.582 (0.116, 1.496) (0.044, 1.264)
    0.995 0.836 1.381 1.22
    4 0.636 (0.095, 2.028) (0.024, 1.608) 0.964 (0.224, 2.679) (0.097, 2.157)
    1.932 1.584 2.455 2.06
    5 3.879 (0.270, 28.090) (0.028, 15.518) 4.254 (0.500, 32.111) (0.104, 16.829)
    27.82 15.49 31.61 16.725
    Case-3 1 0.079 (0.001, 0.332) (0.004, 0.318) 0.188 (0.008, 0.596) (0.000, 0.159)
    0.331 0.314 0.588 0.158
    2 0.173 (0.013, 0.569) (0.000, 0.464) 0.341 (0.047, 0.887) (0.010, 0.762)
    0.557 0.464 0.839 0.752
    3 0.331 (0.042, 1.013) (0.007, 0.829) 0.556 (0.116, 1.392) (0.049, 1.189)
    0.971 0.821 1.276 1.14
    4 0.626 (0.098, 1.952) (0.027, 1.561) 0.905 (0.221, 2.422) (0.105, 1.984)
    1.854 1.534 2.201 1.879
    5 3.707 (0.276, 24.449) (0.034, 13.319) 3.823 (0.485, 25.068) (0.117, 13.824)
    24.173 13.285 24.583 13.707


The Bayesian results are computed based on the joint prior density in (4.1) with two different choices of the hyperparameters (a, b, c, d), namely:

1. Informative prior (IP): a=24.2, b=29.2, c=18.432 and d=15.0685. (These values were obtained by setting the mean and variance of the marginal prior distribution of α equal to ˆαML from the complete data and 0.05, respectively, and likewise the mean and variance of the marginal prior distribution of β equal to ˆβML from the complete data and 0.05, respectively; the resulting two equations were then solved for each parameter. A sketch of this moment-matching step is given after the list below.)

2. Non-informative prior (NIP): a=b=c=d=0.
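As an illustration of the moment-matching step described in item 1, the sketch below assumes gamma marginal priors in shape-rate form (mean = shape/rate, variance = shape/rate^2). This parameterization is an assumption for illustration and need not coincide with the exact form of the joint prior (4.1), so the returned values are not claimed to reproduce the hyperparameters listed above.

```python
def gamma_hyperparams(prior_mean, prior_var):
    """Shape and rate of a gamma prior matching a target mean and variance."""
    shape = prior_mean ** 2 / prior_var
    rate = prior_mean / prior_var
    return shape, rate

# Hypothetical elicitation: match the prior mean to a complete-data ML estimate
# (the value below is a placeholder, not the paper's estimate) with variance 0.05.
alpha_ml_complete = 1.1
print(gamma_hyperparams(alpha_ml_complete, 0.05))
```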

From Table 9, we note that the MLE is slightly larger than the Bayesian estimates based on the NIP for both α and β. When compared with the Bayesian estimates based on the IP, the MLE is smaller than the Bayesian estimates of α and larger than the Bayesian estimates of β. From Table 10, in most cases the average length of the intervals is smaller for Scheme 2, although not far from that of the other schemes. From Table 11, the highest posterior density intervals for the unobserved future sample give more accurate results than the equi-tailed intervals, while the point predictor values meet our expectations when compared with the original real data.

In general, we can conclude that the generalized PHCS is applicable when testing electronic components whose failure times follow the Burr Type-Ⅻ distribution, and it is well suited to saving the time and cost of lifetime experiments.

The ML and Bayesian estimates of the unknown parameters, as well as of the survival and hazard functions, of the Burr Type-Ⅻ lifetime distribution are obtained when the observed sample is a generalized PHCS sample. The existence and uniqueness of the MLEs are investigated. In the Bayesian approach, squared error, Linex and general entropy loss functions based on informative and non-informative prior distributions are considered. The 90% and 95% asymptotic and credible confidence intervals are also constructed for the parameters as well as for the survival and hazard functions. Bayesian point and interval prediction of the order statistics of a future, as yet unobserved, progressively Type-Ⅱ censored sample is also developed. From the numerical results, we list the following concluding remarks (an illustrative sketch of the censored Burr Type-Ⅻ log-likelihood underlying the ML step is given after the list):

1. The MLEs of the unknown parameters of the Burr Type-Ⅻ lifetime distribution based on the generalized PHCS sample exist and are unique if and only if the sample contains some values below 1.

2. In most cases, the Bayesian estimates based on the informative priors perform better than the MLEs.

3. As can be seen in Tables 1-4, the ML estimates are, as expected, very similar to the Bayesian estimates based on non-informative priors. Thus, when no prior knowledge of the unknown parameters is available, it is often easier to use the ML estimators rather than the Bayesian estimators, because the Bayesian estimators are computationally more expensive.

    4. In most cases, as n and m increase, the MSE decreases.

5. The average length of the confidence intervals decreases as T increases. Further, a comparison of the results for the informative priors with those for the non-informative priors indicates that the former produce more accurate results, as we expected. In addition, as n and m increase, the average length decreases.

6. The credible intervals perform well compared with the asymptotic confidence intervals.

7. In all cases, the 95% confidence intervals are wider than the 90% ones, as expected.
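As a complement to the ML step referenced in remark 3 and in the summary above, the sketch below maximizes a progressively censored Burr Type-Ⅻ log-likelihood, assuming the common parameterization with survival function S(x) = (1 + x^β)^(-α) and including an optional term for the Rτ units removed at the termination time. The optimizer, the starting values and the data arrangement are illustrative assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, R, tau=None, R_tau=0):
    """Negative log-likelihood of a progressively censored Burr Type-XII sample.

    Assumes f(x) = a*b*x**(b-1) * (1 + x**b)**(-(a+1)) and
            S(x) = (1 + x**b)**(-a).
    x     : observed failure times
    R     : units removed at each observed failure
    tau   : termination time of the generalized scheme (optional)
    R_tau : units removed at tau
    """
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    x = np.asarray(x, float)
    R = np.asarray(R, float)
    log_f = np.log(a) + np.log(b) + (b - 1) * np.log(x) - (a + 1) * np.log1p(x ** b)
    log_S = -a * np.log1p(x ** b)
    ll = np.sum(log_f) + np.sum(R * log_S)
    if tau is not None and R_tau > 0:
        ll += R_tau * (-a * np.log1p(tau ** b))
    return -ll

# Illustrative fit to Scheme 2 of the real data (starting values are guesses):
x = [0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
     0.9, 1.2, 1.6, 1.8, 2.3, 2.05, 2.6]
R = [0] * 16 + [1]
res = minimize(neg_loglik, x0=[1.0, 1.5], args=(x, R, 2.7, 2), method="Nelder-Mead")
print(res.x)  # approximate ML estimates of (alpha, beta)
```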

All in all, the proposed techniques based on the generalized PHCS can be applied to test products and electronic components whose failure times follow the Burr Type-Ⅻ distribution, which guarantees savings in testing time and cost together with good efficiency of the analysis.

The authors are grateful to the anonymous referee for carefully checking the details and for helpful comments that improved this paper. The authors would also like to thank the Deanship of Scientific Research at King Saud University for funding this Research Group (RG-1435-056).

The authors declare there is no conflict of interest.



    [1] Chaperon I (1986) Theoretical Study of Coning Toward Horizontal and Vertical Wells in Anisotropic Formations: Subcritical and Critical Rates. SPE Annu Tech Conf Exhib, 5-8.
    [2] Chierici GL, Ciucci GM, Pizzi G (1964) A Systematic Study of Gas and Water Coning By Potentiometric Models. J Pet Technol 16: 923-929. doi: 10.2118/871-PA
    [3] Wheatley MJ (1985) An Approximate Theory of Oil/Water Coning. SPE Annual Technical Conference and Exhibition, Las Vegas, Nevada, USA, 22-26.
    [4] Al-Sikaiti SH, Regtien J (2008) Challenging Conventional Wisdom, Waterflooding Experience on Heavy Oil Fields in Southern Oman. World Heavy Oil Congr: 10-12.
    [5] Karami M, Khaksar Manshad A, Ashoori S (2014) The Prediction of Water Breakthrough Time and Critical Rate with a New Equation for an Iranian Oil Field. Pet Sci Technol 32: 211-216. doi: 10.1080/10916466.2011.586960
    [6] Jiang X (2011) A review of physical modelling and numerical simulation of long-term geological storage of CO2. Appl Energy 88: 3557-3566. doi: 10.1016/j.apenergy.2011.05.004
    [7] Jung JY, Huh C, Kang SG, et al. (2013) CO2 transport strategy and its cost estimation for the offshore CCS in Korea. Appl Energy 111: 1054-1060. doi: 10.1016/j.apenergy.2013.06.055
[8] Buscheck TA, White JA, Chen M, et al. (2014) Pre-injection brine production for managing pressure in compartmentalized CO2 storage reservoirs. Energy Procedia 63: 5333-5340.
    [9] González-Nicolás A, Cihan A, Petrusak R, et al. (2019) Pressure management via brine extraction in geological CO2 storage: Adaptive optimization strategies under poorly characterized reservoir conditions. Int J Greenhouse Gas Control 83: 176-185. doi: 10.1016/j.ijggc.2019.02.009
    [10] Pongtepupathum W, Williams J, Krevor S, et al. (2017) Optimising Brine Production for Pressure Management During CO2 sequestration in the Bunter Sandstone of the UK Southern North Sea. Soc Pet Eng.
    [11] Tarrahi M, Afra S (2015) Optimization of Geological Carbon Sequestration in Heterogeneous Saline Aquifers through Managed Injection for Uniform CO2 Distribution. Carbon Management Technology Conference.
    [12] Liao C, Liao X, Mu L, et al. (2017) Improving water-alternating-CO2 flooding of heterogeneous, low permeability oil reservoirs using ensemble optimisation algorithm. Int J Global Warming 12: 242-260. doi: 10.1504/IJGW.2017.084509
    [13] Shamshiri H, Jafarpour B (2010) Optimization of Geologic CO2 Storage in Heterogeneous Aquifers Through Improved Sweep Efficiency. SPE International Conference on CO2 Capture, Storage, and Utilization held in New Orleans, Louisiana, 10-12.
    [14] Kazakis N, Pavlou A, Vargemezis G, et al. (2016) Seawater intrusion mapping using electrical resistivity tomography and hydrochemical data. An application in the coastal area of eastern Thermaikos Gulf, Greece. Sci Total Environ 543: 373-387.
[15] Goldman M, Kafri U (2006) Hydrogeophysical applications in coastal aquifers. In: Vereecken H, et al. (Eds.), Applied Hydrogeophysics, Springer: Dordrecht, The Netherlands, 233-254.
    [16] Kuras O, Pritchard J, Meldrum P, et al. (2009) Monitoring hydraulic processes with automated time-lapse electrical resistivity tomography (ALERT): Compt Rendus Geosci 341: 868-885.
    [17] Dell'Aversana P, Rizzo E, Servodio R (2017) 4D borehole electric tomography for hydrocarbon reservoir monitoring. EAGE Conference and Exhibition, 2017: 1-5.
    [18] McNeice GW, Colombo D (2018) 3D inversion of surface to borehole CSEM for waterflood monitoring. SEG Int Expo Ann Meet, 878-880.
    [19] Bergmann P, Schmidt-Hattenberger C, Kiessling D, et al. (2012) Surface-downhole electrical resistivity tomography applied to monitoring of CO2 storage at Ketzin, Germany. Geophysics 77: B253-B267.
[20] Bergmann P, Schmidt-Hattenberger C, Labitzke T, et al. (2017) Fluid injection monitoring using electrical resistivity tomography—five years of CO2 injection at Ketzin, Germany. Geophys Prospect 65: 859-875. doi: 10.1111/1365-2478.12426
[21] Schmidt-Hattenberger C, Bergmann P, Bösing D, et al. (2013) Electrical resistivity tomography (ERT) for monitoring of CO2 migration-from tool development to reservoir surveillance at the Ketzin pilot site. Energy Procedia 37: 4268-4275. doi: 10.1016/j.egypro.2013.06.329
    [22] Descloitres M, Ribolzi O, Le Troquer Y (2003) Study of infiltration in a Sahelian gully erosion area using time-lapse resistivity mapping. Catena 53: 229-253. doi: 10.1016/S0341-8162(03)00038-9
    [23] Dell'Aversana P, Servodio R, Bottazzi F, et al. (2019_a) Asset Value Maximization through a Novel Well Completion System for 3d Time Lapse Electromagnetic Tomography Supported by Machine Learning. Abu Dhabi Int Pet Exhib Conf.
[24] Dell'Aversana P, Servodio R, Bottazzi F, et al. (2019_b) Asset Value Maximization through a Novel Well Completion System for 3d Time Lapse Electromagnetic Tomography Supported by Machine Learning. Soc Pet Eng J.
    [25] Bottazzi F, Dell'Aversana P, Molaschi C, et al. (2020) A New Downhole System for Real Time Reservoir Fluid Distribution Mapping: E-REMM, the Eni-Reservoir Electro-Magnetic Mapping System. Int Pet Technol Conf.
    [26] Brown RG (1956) Exponential Smoothing for Predicting Demand. Cambridge, Massachusetts: Arthur D. Little Inc 15.
    [27] Saad EW, Prokhorov DV, Wunsch DC (1998) Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks. IEEE T Neural Network 9: 1456-1470. doi: 10.1109/72.728395
    [28] Tealab A (2018) Time series forecasting using artificial neural networks methodologies: A systematic review. Future Comput Inform J 3: 334-340. doi: 10.1016/j.fcij.2018.10.003
    [29] Sherstinsky A (2020) Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys D 404: 132306. doi: 10.1016/j.physd.2019.132306
    [30] Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement Learning: A Survey. J Artif Intell Res 4: 237-285. doi: 10.1613/jair.301
    [31] Raschka S, Mirjalili V (2017) Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Ed., PACKT Books.
    [32] Russell S, Norvig P (2016) Artificial Intelligence: A Modern approach, Global Edition, Pearson Education, Inc., publishing as Prentice Hall.
    [33] Machine learning workflows. Multidisciplinary applications using Python, 2021. Available from: https://www.researchgate.net/publication/348741974_Q_Learning_generic.
    [34] Benson SM, Surles T (2006) Carbon dioxide capture and storage: an overview with emphasis on capture and storage in deep geological formations. Proc IEEE 94: 1795-1805. doi: 10.1109/JPROC.2006.883718
    [35] Christensen NB, Sherlock D, Dodds K (2006) Monitoring CO2 injection with cross-hole electrical resistivity tomography. Explor Geophys 37: 44-49. doi: 10.1071/EG06044
    [36] LaBrecque DJ, Miletto M, Daily W, et al. (1996) The effects of noise on Occam's inversion of resistivity tomography data. Geophysics 61: 538-548. doi: 10.1190/1.1443980
    [37] Dell'Aversana P, Carbonara S, Vitale S, et al (2011) Quantitative estimation of oil saturation from marine CSEM data: A case history. First Break 29.
    [38] Befus KM (2017) Pyres: A Python Wrapper for Electrical Resistivity Modeling with R2. J Geophys Eng 15.
[39] Binley A, Kemna A (2005) Electrical Methods, In: Rubin and Hubbard (Eds.), Hydrogeophysics, Springer, 129-156.
    [40] Binley A (2015) Tools and Techniques: DC Electrical Methods, In: Treatise on Geophysics, 2nd Ed., Schubert: Elsevier, 233-259.
    [41] Binley A (2016) R2 version 3.1 Manual. Lancaster, UK. Available from: http://es.lancs.ac.uk/people/amb/Freeware/freeware.htm.
    [42] PUNQ-S3 reservoir model, Imperial College of London. Available from: https://www.imperial.ac.uk/earth-science/research/research-groups/perm/standard-models/.
    [43] Archie GE (1950) Introduction to petrophysics of reservoir rocks. AAPG Bulletin 34: 943-961.
    [44] Claerbout JF, Muir F (1973) Robust modeling with erratic data. Geophysics 18: 826-844. doi: 10.1190/1.1440378
    [45] Sagheer A, Kotb M (2019) Time series forecasting of petroleum production using deep LSTM recurrent networks. Neurocomputing 323: 203-213. doi: 10.1016/j.neucom.2018.09.082
© 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)