
Asymptotic structure of the spectrum in a Dirichlet-strip with double periodic perforations

  • Received: 01 October 2018 Revised: 01 May 2019
  • Primary: 35B27, 35P05, 47A55, 35J25, 47A10; Secondary: 35P10, 35P15, 47A75

  • We address a spectral problem for the Dirichlet-Laplace operator in a waveguide $ \Pi^\varepsilon $. $ \Pi^\varepsilon $ is obtained from an unbounded two-dimensional strip $ \Pi $ which is periodically perforated by a family of holes, which are also periodically distributed along a line, the so-called "perforation string". We assume that the two periods are different, namely, $ O(1) $ and $ O(\varepsilon) $ respectively, where $ 0< \varepsilon\ll 1 $. We look at the band-gap structure of the spectrum $ \sigma^\varepsilon $ as $ \varepsilon\to 0 $. We derive asymptotic formulas for the endpoints of the spectral bands and show that $ \sigma^\varepsilon $ has a large number of short bands of length $ O(\varepsilon) $ which alternate with wide gaps of width $ O(1) $.

    Citation: Sergei A. Nazarov, Rafael Orive-Illera, María-Eugenia Pérez-Martínez. Asymptotic structure of the spectrum in a Dirichlet-strip with double periodic perforations[J]. Networks and Heterogeneous Media, 2019, 14(4): 733-757. doi: 10.3934/nhm.2019029




    In reliability and life-testing experiments, censored samples are a common way to save time and reduce the number of failed experimental items. Type-Ⅰ and Type-Ⅱ censoring schemes are the most common censoring schemes used in life-testing and reliability studies. The mixture of these two schemes is called the hybrid censoring scheme. For more details about hybrid censoring schemes, see Balakrishnan and Kundu [1]. The main disadvantage of the conventional Type-Ⅰ, Type-Ⅱ and hybrid censoring schemes is that they do not allow the experimenter to remove experimental items at any time point other than the terminal point. For this reason, one may use a more general censoring scheme called progressive Type-Ⅱ censoring. In this censoring scheme, $ n $ items are placed on a test, and the number of items to be failed, denoted by $ m $, and the numbers of items removed at each failure time, denoted by $ R_i $, are determined in advance. At the time of the first failure $ x_{1:m:n} $, $ R_{1} $ items are randomly removed from the remaining $ n-1 $ surviving items. Similarly, at the time of the second failure $ x_{2:m:n} $, $ R_{2} $ items of the remaining $ n-2-R_{1} $ items are randomly removed, and so on. At the time $ x_{m:m:n} $, all the remaining $ n-m-R_{1}-R_{2}-\cdots-R_{m-1} $ items are removed. For more information see Balakrishnan and Aggarwala [2] and Balakrishnan [3].

    Kundu and Joarder [4] introduced the Type-Ⅰ progressive hybrid censoring scheme by combining the concepts of progressive and hybrid censoring. In this scheme, $ n $ items are placed on a test with the progressive censoring scheme $ R_{1}, R_{2}, \ldots, R_{m} $, and the experiment is terminated at $ T^{*} = \min(x_{m:m:n}, T) $, where $ T $ is a predetermined time. The drawback of this scheme is that the statistical inference methods may have low efficiency or may not be applicable, since the number of failures is random and can be zero or very small. To overcome this disadvantage and increase the efficiency of the statistical inference, Ng et al. [5] proposed the adaptive Type-Ⅱ progressive hybrid censoring scheme (A-II PHCS). In the A-II PHCS, the number of failures $ m $ and the progressive censoring scheme $ R_{1}, R_{2}, \ldots, R_{m} $ are predetermined, and the experimental time is allowed to run over the predetermined time $ T $ with the flexibility of changing some values of $ R_i $ during the experiment. If $ X_{m:m:n} < T $, the experiment stops at this time and we obtain the conventional progressive Type-Ⅱ censoring scheme. On the other hand, if $ X_{J:m:n} < T < X_{J+1:m:n} $, where $ X_{J:m:n} $ is the $ J $-th failure time occurring before the predetermined time $ T $ and $ J+1 < m $, then we adjust the progressive censoring scheme by resetting $ R_{J+1} = R_{J+2} = \cdots = R_{m-1} = 0 $ and $ R_{m} = n-m-\sum_{i = 1}^{J} R_{i} $. This adaptation ensures that the experiment terminates once $ m $ failures are observed, and guarantees that the total test time will not be too far from the time $ T $.
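    As a concrete illustration of the adjustment rule above, the following minimal Python sketch shows how the planned removal pattern $ R_{1}, \ldots, R_{m} $ is modified once the time $ T $ falls between the $ J $-th and $ (J+1) $-th failures; the function name and inputs are ours, not part of the original scheme.

```python
# Minimal sketch (not the authors' code): adjusting the removal pattern R
# under the A-II PHCS rule, given the planned scheme R (length m), the sorted
# failure times x (length m) and the threshold T, with n = m + sum(R).
def adjust_scheme(R, x, T):
    m = len(R)
    n = m + sum(R)
    if x[m - 1] < T:                        # all m failures occur before T:
        return list(R)                      # conventional progressive Type-II
    J = sum(1 for xi in x[:m] if xi < T)    # number of failures observed before T
    R_adj = list(R[:J]) + [0] * (m - 1 - J)   # R_{J+1} = ... = R_{m-1} = 0
    R_adj.append(n - m - sum(R[:J]))          # R_m = n - m - sum_{i<=J} R_i
    return R_adj

# Example: n = 20, m = 10, planned R = (1, ..., 1); only 6 failures before T = 2.0.
print(adjust_scheme([1] * 10, [0.3, 0.5, 0.8, 1.1, 1.4, 1.9, 2.6, 3.0, 3.3, 3.9], 2.0))
# -> [1, 1, 1, 1, 1, 1, 0, 0, 0, 4]
```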

    Recently, many authors have studied different distributions based on the A-II PHCS. For example, Lin et al. [6] discussed the estimation problem for the Weibull distribution. Hemmati et al. [7] discussed the maximum likelihood and approximate maximum likelihood estimation for the log-normal distribution. Mahmoud et al. [8] studied the Bayes estimation of the Pareto distribution. Ismail [9] investigated the estimation of the Weibull distribution and the acceleration factor under a step-stress partially accelerated life test model. AL Sobhi and Soliman [10] studied the estimation of the parameters, reliability and hazard functions of the exponentiated Weibull distribution. Nassar and Abu-Kasem [11] and Nassar et al. [12] investigated the estimation problems of the inverse Weibull and Weibull distributions, respectively.

    Burr [13] introduced the Burr type-XII distribution as one member of a system of twelve types of cumulative distribution functions. The Burr type-XII distribution has many applications in various fields, including probability theory, reliability, failure time modeling and household income. A random variable $ X $ is said to have the two-parameter Burr type-XII distribution, denoted by Burr$ (a, b) $, if its probability density and reliability functions are given, respectively, by

    $ f(x) = a b x^{a-1}\left(1+x^{a}\right)^{-(b+1)}, \quad x > 0, \; a, b > 0, $ (1.1)

    and

    $ R(x) = \left(1+x^{a}\right)^{-b}, \quad x > 0, $ (1.2)

    where $ a $ and $ b $ are shape parameters. The Lomax distribution can be obtained as a special case of the Burr$ (a, b) $ distribution by setting $ a = 1 $. Also, when $ b = 1 $, the Burr$ (a, b) $ distribution reduces to the Champernowne distribution. Evans and Ragab [14] discussed Bayesian inference for the Burr type-XII distribution based on Type-Ⅱ censored samples. Moore and Papadopoulos [15] used the Bayesian estimation method to estimate the Burr type-XII distribution parameters using three different loss functions. Mousa and Jaheen [16] obtained the Bayesian estimators of the Burr type-XII distribution parameters under progressive Type-Ⅱ censoring. Jaheen and Okasha [17] discussed the estimation problem of the Burr type-XII distribution using Bayesian and E-Bayesian estimation under Type-Ⅱ censoring. Hanieh and Abdolreza [18] presented statistical inference and prediction for the Burr type-XII distribution under a unified hybrid censoring scheme. See also Montaser [19], Jia et al. [20] and Arabi and Noori [21].
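    For reference, a small Python sketch of the density (1.1) and the reliability function (1.2) is given below; the function names are illustrative.

```python
# Minimal sketch of the Burr(a, b) density (1.1) and reliability function (1.2).
import numpy as np

def burr_pdf(x, a, b):
    x = np.asarray(x, dtype=float)
    return a * b * x ** (a - 1) * (1.0 + x ** a) ** (-(b + 1))

def burr_reliability(x, a, b):
    x = np.asarray(x, dtype=float)
    return (1.0 + x ** a) ** (-b)

# With a = 1 the density reduces to the Lomax form b / (1 + x)^(b + 1).
print(burr_pdf(0.75, 1.0, 0.5), burr_reliability(0.75, 1.7379, 0.2936))
```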

    To the best of our knowledge, the E-Bayesian estimation of the Burr$ (a, b) $ distribution under the A-II PHCS has not yet been studied. The main aim of this paper is to investigate the E-Bayesian estimation of the parameter $ b $ and the reliability function of the Burr$ (a, b) $ distribution under the A-II PHCS, with the assumption that the parameter $ a $ is known. The maximum likelihood, Bayesian and E-Bayesian estimation methods are considered using three different prior distributions. The Bayesian and E-Bayesian estimations are discussed based on the squared error (SE) and LINEX loss functions. The properties of the E-Bayesian estimates are studied and the E-posterior risk is also obtained. A simulation study is conducted to compare the performance of the different estimators of the parameter $ b $ and the reliability function. An application to a real data set shows that the E-Bayesian estimators perform better than the maximum likelihood and Bayesian estimators.

    The rest of this paper is organized as follows: In Section 2, we obtain the maximum likelihood and Bayesian estimates of the parameter $ b $ and the reliability function of the Burr$ (a, b) $ distribution. The E-Bayesian estimation is considered in Section 3. In Section 4, we study the properties of the E-Bayesian estimates. The E-posterior risk of the E-Bayesian estimation is obtained in Section 5. A simulation study is performed in Section 6. A real data set is analyzed in Section 7. Finally, the paper is concluded in Section 8.

    Based on an adaptive Type-Ⅱ progressive hybrid censored sample of size $ m $ obtained from a life test of $ n $ items from the Burr$ (a, b) $ distribution, we can write the likelihood function as follows

    $ L(a, b \mid \underline{x}) \propto a^{m} b^{m} \psi(a; \underline{x})\, e^{-bP}, $ (2.1)

    where

    $ \underline{x} = (x_1, x_2, \ldots, x_m), \qquad \psi(a; \underline{x}) = \prod\limits_{i = 1}^{m}\left(\frac{x_i^{a-1}}{1+x_i^{a}}\right), $
    $ P \equiv P(a; \underline{x}) = \sum\limits_{i = 1}^{m}\ln\left(1+x_i^{a}\right)+\sum\limits_{i = 1}^{J}R_i\ln\left(1+x_i^{a}\right)+R^{*}\ln\left(1+x_m^{a}\right), $

    where $ R^* = n-m-\sum_{i = 1}^{J}R_i $, and $ x_i = x_{i:m:n} $ for simplicity of notation. Assuming that the parameter $ a $ is known, the maximum likelihood estimate (MLE) of the parameter $ b $ is obtained as follows

    $ \hat{b}_{ML} = \frac{m}{P}. $ (2.2)

    From (2.2) and the invariance property of the maximum likelihood, we can obtain the MLE of the reliability function $ R\left(x\right) $ by replacing $ b $ by its MLE in (1.2).
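    A short sketch of these computations is given below: the statistic $ P $ is assembled from the observed failures, the removals actually applied before $ T $, and the final removal $ R^{*} $; the MLE of $ b $ then follows from (2.2). The argument names are ours.

```python
# Sketch: the statistic P of Eq. (2.1) and the MLE (2.2), assuming a is known.
# x: the m observed failure times (sorted); R_applied: the removals R_1,...,R_J
# actually applied before T; n: number of items placed on test.
import numpy as np

def statistic_P(x, R_applied, n, a):
    x = np.asarray(x, dtype=float)
    m = len(x)
    R_star = n - m - sum(R_applied)
    return (np.log1p(x ** a).sum()
            + sum(Ri * np.log1p(x[i] ** a) for i, Ri in enumerate(R_applied))
            + R_star * np.log1p(x[-1] ** a))

def mle_b(x, R_applied, n, a):
    return len(x) / statistic_P(x, R_applied, n, a)
```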

    In the Bayesian estimation, we assume that the parameter $ a $ is known and that the parameter $ b $ follows the gamma conjugate prior distribution, as proposed by Moore and Papadopoulos [15], in the following form

    $ g(b) = \frac{\theta^{\alpha}}{\Gamma(\alpha)}\, b^{\alpha-1} e^{-b\theta}, \quad b > 0, $ (2.3)

    where $ \alpha > 0 $ and $ \theta > 0 $. From (2.1) and (2.3), the posterior density of $ b $ given $ \underline {\rm{x}} $ can be written as

    $ q(b \mid \underline{x}) = A\, b^{m+\alpha-1} e^{-(\theta+P)b}, \quad b > 0, $ (2.4)

    where

    $ A = \frac{(\theta+P)^{m+\alpha}}{\Gamma(m+\alpha)}. $ (2.5)

    To obtain the Bayes estimate of $ b $, we consider two types of loss functions. The first loss function is the SE loss function and the Bayes estimate in this case is the posterior mean. The second one is the LINEX loss function with the Bayes estimate obtained as

    $ \hat{b}_{BL} = -\frac{1}{z}\ln E\left(e^{-zb}\right), \quad z \neq 0. $ (2.6)

    Based on the SE loss function, we can obtain the Bayes estimate of $ b $ as follows

    $ \hat{b}_{BS}(\alpha, \theta) = \frac{m+\alpha}{\theta+P}, $ (2.7)

    while the Bayes estimate of $ b $ using the LINEX loss function can be obtained from (2.4) and (2.6) by

    $ \hat{b}_{BL}(\alpha, \theta) = \frac{m+\alpha}{z}\ln\left(1+\frac{z}{\theta+P}\right). $ (2.8)
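    Both estimates are simple functions of $ m $, $ P $ and the hyper-parameters; a minimal sketch follows.

```python
# Sketch of the Bayes estimates of b under SE loss, Eq. (2.7), and LINEX loss,
# Eq. (2.8), given m, the statistic P and the gamma hyper-parameters.
import numpy as np

def bayes_b_SE(m, P, alpha, theta):
    return (m + alpha) / (theta + P)                     # Eq. (2.7)

def bayes_b_LINEX(m, P, alpha, theta, z):
    return (m + alpha) / z * np.log1p(z / (theta + P))   # Eq. (2.8)
```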

    Under the SE loss function, the Bayes estimate of the reliability function can be obtained from (1.2) and (2.4) as

    $ \hat{R}_{BS}(x) = \left(\frac{\theta+P}{\theta+P+P^{*}}\right)^{m+\alpha}, $ (2.9)

    where

    $ P^{*} = \ln\left(1+x^{a}\right). $

    Similarly, from (1.2), (2.4) and (2.6), the Bayes estimate of the reliability function under LINEX loss function can be obtained as follows

    $ \hat{R}_{BL}(x) = -\frac{1}{z}\ln E\left(e^{-zR(x)}\right) = -\frac{1}{z}\ln\left[\frac{(\theta+P)^{m+\alpha}}{\Gamma(m+\alpha)}\int_0^{\infty} e^{-z e^{-bP^{*}}}\, b^{m+\alpha-1} e^{-(\theta+P)b}\, db\right] = -\frac{1}{z}\ln\left[\sum\limits_{i = 0}^{\infty}\frac{(-z)^{i}}{\Gamma(i)}\left(\frac{\theta+P}{\theta+P+iP^{*}}\right)^{m+\alpha}\right]. $ (2.10)
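    A numerical sketch of (2.9) and (2.10) is given below. For (2.10) the series is truncated, and the $ i $-th weight is taken as $ (-z)^{i}/i! $, i.e., the Taylor expansion of the exponential; treating the paper's $ \Gamma(i) $ notation this way is our assumption.

```python
# Sketch of the Bayes estimates of R(x) under SE loss (2.9) and LINEX loss (2.10),
# with the LINEX series truncated after `terms` summands and weighted by (-z)^i / i!.
from math import factorial, log, log1p

def bayes_R_SE(x, m, P, alpha, theta, a):
    P_star = log1p(x ** a)
    return ((theta + P) / (theta + P + P_star)) ** (m + alpha)

def bayes_R_LINEX(x, m, P, alpha, theta, a, z, terms=60):
    P_star = log1p(x ** a)
    s = sum((-z) ** i / factorial(i)
            * ((theta + P) / (theta + P + i * P_star)) ** (m + alpha)
            for i in range(terms))
    return -log(s) / z
```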

    Han [22] introduced the E-Bayesian (Expected Bayesian) estimation to obtain the estimate of the scale parameter of the exponential distribution based on SE loss function. He also derived the properties of the E-Bayesian estimation. For more relevant research about the E-Bayesian estimation, see Han [22,23], Okasha and Wang [24], Azimi et al. [25], Okasha [26,27,28], and Abdallah and Junping [29]. Han [22] stated that the prior distribution of $ \alpha $ and $ \theta $ should be determined to ensure that the prior distribution $ g(b) $ is a decreasing function in $ b $. To make sure this condition is satisfied, we obtain the first derivative of $ g(b) $ with respect to $ b $ as

    $ \frac{dg(b)}{db} = \frac{\theta^{\alpha}}{\Gamma(\alpha)}\, b^{\alpha-2} e^{-b\theta}\left[(\alpha-1)-b\theta\right], $

    where $ \alpha, \theta, b > 0 $. It is noted that when $ 0 < \alpha < 1 $ and $ \theta > 0 $ we have $ \frac{dg(b)}{db} < 0, $ and therefore $ g(b) $ is a decreasing function of $ b $. Suppose that $ \alpha $ and $ \theta $ are independent and have the bivariate density function

    $ \pi(\alpha, \theta) = \pi_1(\alpha)\,\pi_2(\theta), $

    then, according to Han [30] the E-Bayesian estimate of the parameter $ b $ (expectation of the Bayesian estimate of $ b $) can be obtained as follows

    $ \hat{b}_{EB} = \iint_{D}\hat{b}_{BS}(\alpha, \theta)\,\pi(\alpha, \theta)\, d\alpha\, d\theta, $ (3.1)

    where $ \hat{b}_{BS}(\alpha, \theta) $ is the Bayes estimate of $ b $ under the corresponding loss function. For more details about E-Bayesian estimation, see Han [22], Jaheen and Okasha [17] and Okasha [26,27,28].
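    Definition (3.1) can also be evaluated directly by numerical double integration once a hyper-prior is fixed; the sketch below does this for $ \pi_1 $ of (3.2) using scipy, and can be cross-checked against the closed forms (3.3)–(3.5) derived next.

```python
# Sketch of Eq. (3.1): the E-Bayesian estimate of b under SE loss as the
# expectation of the Bayes estimate (2.7) over the hyper-prior pi_1 of (3.2).
from scipy import integrate
from scipy.stats import beta as beta_dist

def e_bayes_b_SE_numeric(m, P, u, v, c):
    # pi_1(alpha, theta) = Beta(u, v) density in alpha times 1/c in theta on (0, c)
    integrand = lambda theta, alpha: ((m + alpha) / (theta + P)
                                      * beta_dist.pdf(alpha, u, v) / c)
    value, _ = integrate.dblquad(integrand, 0.0, 1.0, 0.0, c)
    return value
```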

    Here, we obtain the E-Bayesian estimates of the parameter $ b $ by considering three different prior distributions for the hyper-parameters $ \alpha $ and $ \theta $. These prior distributions are selected to show the effect of the different prior distributions on the E-Bayesian estimation of the parameter $ b $. The selected prior distributions are given by

    $ \left.\begin{array}{l} \pi_1(\alpha, \theta) = \dfrac{1}{c\,B(u, v)}\,\alpha^{u-1}(1-\alpha)^{v-1}, \quad 0 < \alpha < 1, \; 0 < \theta < c, \\[2mm] \pi_2(\alpha, \theta) = \dfrac{2}{c^{2}B(u, v)}\,(c-\theta)\,\alpha^{u-1}(1-\alpha)^{v-1}, \quad 0 < \alpha < 1, \; 0 < \theta < c, \\[2mm] \pi_3(\alpha, \theta) = \dfrac{2\theta}{c^{2}B(u, v)}\,\alpha^{u-1}(1-\alpha)^{v-1}, \quad 0 < \alpha < 1, \; 0 < \theta < c, \end{array}\right\} $ (3.2)

    where $ B(u, v) $ is the beta function. These prior distributions were used by Jaheen and Okasha [17] to guarantee that $ g(b) $ is a decreasing function in $ b $. Now, the E-Bayesian estimates of the parameter $ b $ under the SE loss function can be obtained from (2.7), (3.1) and (3.2). Using the prior distribution $ \pi_1(\alpha, \theta) $, the E-Bayesian estimate of $ b $ under the SE loss function is given by

    $ \hat{b}_{EBS1} = \iint_{D}\hat{b}_{BS}(\alpha, \theta)\,\pi_1(\alpha, \theta)\, d\theta\, d\alpha = \frac{1}{c\,B(u, v)}\int_0^1\int_0^c\left(\frac{m+\alpha}{\theta+P}\right)\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c}\left(m+\frac{u}{u+v}\right)\ln\left(1+\frac{c}{P}\right). $ (3.3)

    Using the same approach, the E-Bayesian estimates of $ b $ based on $ \pi_2(\alpha, \theta) $ and $ \pi_3(\alpha, \theta) $ are given, respectively, by

    $ \hat{b}_{EBS2} = \frac{2}{c}\left(m+\frac{u}{u+v}\right)\left[\left(1+\frac{P}{c}\right)\ln\left(1+\frac{c}{P}\right)-1\right], $ (3.4)

    and

    $ \hat{b}_{EBS3} = \frac{2}{c}\left(m+\frac{u}{u+v}\right)\left[1-\frac{P}{c}\ln\left(1+\frac{c}{P}\right)\right]. $ (3.5)
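    The three closed forms (3.3)–(3.5) share the factor $ m+\frac{u}{u+v} $ and differ only in how $ \ln\left(1+\frac{c}{P}\right) $ enters; a small sketch collecting them follows.

```python
# Sketch of the closed-form E-Bayesian estimates of b under SE loss,
# Eqs. (3.3)-(3.5); M denotes m + u/(u+v).
import numpy as np

def e_bayes_b_SE(m, P, u, v, c):
    M = m + u / (u + v)
    r = np.log1p(c / P)
    b1 = M / c * r                               # Eq. (3.3), prior pi_1
    b2 = 2 * M / c * ((1 + P / c) * r - 1)       # Eq. (3.4), prior pi_2
    b3 = 2 * M / c * (1 - (P / c) * r)           # Eq. (3.5), prior pi_3
    return b1, b2, b3
```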

    The E-Bayesian estimation of $ b $ under LINEX loss function can be obtained by using the different prior distributions of the hyperparameters given in (3.2). For the prior distribution $ \pi_1(\alpha, \theta) $ and based on (2.8) and (3.1), the E-Bayesian estimate of $ b $ is obtained as

    $ \hat{b}_{EBL1} = \iint_{D}\hat{b}_{BL}(\alpha, \theta)\,\pi_1(\alpha, \theta)\, d\theta\, d\alpha = \frac{1}{cz\,B(u, v)}\int_0^1\int_0^c(m+\alpha)\,\alpha^{u-1}(1-\alpha)^{v-1}\ln\left(1+\frac{z}{\theta+P}\right) d\theta\, d\alpha = \frac{1}{cz}\left(m+\frac{u}{u+v}\right)\left[c\ln\left(1+\frac{z}{c+P}\right)+(P+z)\ln\left(1+\frac{c}{P+z}\right)-P\ln\left(1+\frac{c}{P}\right)\right]. $ (3.6)

    Similarly, the E-Bayesian estimates of $ b $ using $ \pi_2(\alpha, \theta) $ and $ \pi_3(\alpha, \theta) $ are given, respectively, by

    $ \hat{b}_{EBL2} = \left(m+\frac{u}{u+v}\right)\left[\frac{1}{z}\ln\left(1+\frac{z}{P}\right)-\frac{(P+c)^{2}}{c^{2}z}\ln\left(1+\frac{c}{P}\right)+\frac{(P+z+c)^{2}}{c^{2}z}\ln\left(1+\frac{c}{P+z}\right)-\frac{1}{c}\right] $ (3.7)

    and

    $ \hat{b}_{EBL3} = \left(m+\frac{u}{u+v}\right)\left[\frac{1}{z}\ln\left(1+\frac{z}{c+P}\right)+\frac{P^{2}}{c^{2}z}\ln\left(1+\frac{c}{P}\right)-\frac{(P+z)^{2}}{c^{2}z}\ln\left(1+\frac{c}{P+z}\right)+\frac{1}{c}\right]. $ (3.8)
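    The corresponding LINEX expressions (3.6)–(3.8) can be coded in the same way; a sketch is given below.

```python
# Sketch of the closed-form E-Bayesian estimates of b under LINEX loss,
# Eqs. (3.6)-(3.8); M denotes m + u/(u+v).
import numpy as np

def e_bayes_b_LINEX(m, P, u, v, c, z):
    M = m + u / (u + v)
    b1 = M / (c * z) * (c * np.log1p(z / (c + P))
                        + (P + z) * np.log1p(c / (P + z))
                        - P * np.log1p(c / P))                        # Eq. (3.6)
    b2 = M * (np.log1p(z / P) / z
              - (P + c) ** 2 / (c ** 2 * z) * np.log1p(c / P)
              + (P + z + c) ** 2 / (c ** 2 * z) * np.log1p(c / (P + z))
              - 1 / c)                                                # Eq. (3.7)
    b3 = M * (np.log1p(z / (c + P)) / z
              + P ** 2 / (c ** 2 * z) * np.log1p(c / P)
              - (P + z) ** 2 / (c ** 2 * z) * np.log1p(c / (P + z))
              + 1 / c)                                                # Eq. (3.8)
    return b1, b2, b3
```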

    Based on the SE loss function, the E-Bayesian estimates of the reliability function can be derived by using the three different prior distributions of the hyper-parameters given by (3.2). For the first prior distribution $ \pi_1(\alpha, \theta) $, the E-Bayesian estimate of the reliability function is obtained from (2.9) and (3.1) as

    $ \hat{R}_{EBS1} = \iint_{D}\hat{R}_{BS}(x)\,\pi_1(\alpha, \theta)\, d\theta\, d\alpha = \frac{1}{c\,B(u, v)}\int_0^1\int_0^c\left(\frac{\theta+P}{\theta+P+P^{*}}\right)^{m+\alpha}\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c\,B(u, v)}\int_0^c\left(1+\frac{P^{*}}{\theta+P}\right)^{-m}\left(\int_0^1 e^{\alpha\ln\left(\frac{\theta+P}{\theta+P+P^{*}}\right)}\alpha^{u-1}(1-\alpha)^{v-1}\, d\alpha\right) d\theta = \frac{1}{c}\int_0^c\left(1+\frac{P^{*}}{\theta+P}\right)^{-m}F_{1:1}\left(u, u+v; \ln\left(\frac{\theta+P}{\theta+P+P^{*}}\right)\right) d\theta, $ (3.9)

    where $ F_{1:1}\left(., .;.\right) $ is the generalized hypergeometric function. See for more details Gradshteyn and Ryzhik [31]. Similarly, the E-Bayesian estimates of the reliability function based on the prior distributions 2 and 3 are given, respectively, by

    $ \hat{R}_{EBS2} = \frac{2}{c^{2}}\int_0^c(c-\theta)\left(1+\frac{P^{*}}{\theta+P}\right)^{-m}F_{1:1}\left(u, u+v; \ln\left(\frac{\theta+P}{\theta+P+P^{*}}\right)\right) d\theta, $ (3.10)

    and

    $ \hat{R}_{EBS3} = \frac{2}{c^{2}}\int_0^c\theta\left(1+\frac{P^{*}}{\theta+P}\right)^{-m}F_{1:1}\left(u, u+v; \ln\left(\frac{\theta+P}{\theta+P+P^{*}}\right)\right) d\theta. $ (3.11)

    The integrals in (3.9), (3.10) and (3.11) cannot be computed in simple closed forms. Therefore, numerical techniques must be used to obtain the E-Bayesian estimates of the reliability function based on the SE loss function for the different prior distributions.
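    One such numerical route is sketched below: scipy's Kummer function `hyp1f1` is used in place of $ F_{1:1}(u, u+v;\,\cdot\,) $ (our reading of that notation), and a one-dimensional quadrature handles the remaining integral in $ \theta $.

```python
# Sketch of a numerical evaluation of Eqs. (3.9)-(3.11).
import numpy as np
from scipy import integrate
from scipy.special import hyp1f1

def e_bayes_R_SE(x, m, P, u, v, c, a):
    P_star = np.log1p(x ** a)

    def kernel(theta):
        ratio = (theta + P) / (theta + P + P_star)
        return ratio ** m * hyp1f1(u, u + v, np.log(ratio))

    R1 = integrate.quad(lambda t: kernel(t) / c, 0.0, c)[0]                      # Eq. (3.9)
    R2 = integrate.quad(lambda t: 2 * (c - t) * kernel(t) / c ** 2, 0.0, c)[0]   # Eq. (3.10)
    R3 = integrate.quad(lambda t: 2 * t * kernel(t) / c ** 2, 0.0, c)[0]         # Eq. (3.11)
    return R1, R2, R3
```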

    Under LINEX loss function, the E-Bayesian estimates of the reliability function using the prior distribution $ \pi_i(\alpha, \theta), i = 1, 2, 3 $, can be obtained from (2.10) and (3.1) as

    $ \hat{R}_{EBLi} = \iint_{D}\hat{R}_{BL}(x)\,\pi_i(\alpha, \theta)\, d\theta\, d\alpha. $ (3.12)

    The integrals in (3.12) are too complicated to obtain in closed form, so numerical computations are used to obtain the E-Bayesian estimates of the reliability function under the LINEX loss function.

    In this section, we investigate the relations among the different E-Bayesian estimates of the parameter $ b $ and the reliability function based on the SE loss function in terms of the biases $ Bi(b_{EBSi}) $ and $ Bi(R_{EBSi}) $, $ i = 1, 2, 3 $. Moreover, we discuss the relations between the different E-Bayesian estimates of $ b $ and the reliability function under the LINEX loss function through the relations between the biases $ Bi(b_{EBLi}) $ and $ Bi(R_{EBLi}) $.

    The relations between $ Bi(b_{EBSi}) $, $ Bi(b_{EBLi}) $, $ Bi(R_{EBSi}) $ and $ Bi(R_{EBLi}) $, $ i = 1, 2, 3 $, are described in the following theorems:

    Theorem 1. Let $ y = \frac{c}{P} $, $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, and let $ b_{EBSi} $ be given by (3.3), (3.4) and (3.5). Then we have the following conclusions:

    (i) $ Bi(b_{EBS2}) < Bi(b_{EBS1}) < Bi(b_{EBS3}) $,

    (ii) $ \lim_{y\rightarrow 0} Bi(b_{EBS1}) = \lim_{y\rightarrow 0} Bi(b_{EBS2}) = \lim_{y\rightarrow 0} Bi(b_{EBS3}) $.

    Proof. (i) From (3.3), (3.4) and (3.5), we have

    $ Bi(b_{EBS1})-Bi(b_{EBS2}) = Bi(b_{EBS3})-Bi(b_{EBS1}) = \frac{1}{c}\left(m+\frac{u}{u+v}\right)\left[\frac{c+2P}{c}\ln\left(1+\frac{c}{P}\right)-2\right]. $ (4.1)

    For $ -1 < x < 1 $, we have: $ \ln(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\ldots = \sum_{k = 1}^\infty (-1)^{k-1}\frac{x^k}{k}. $ Let $ y = \frac{c}{P} $, when $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, we have:

    $ \left[\left(1+2y^{-1}\right)\ln(1+y)-2\right] = \left(1+2y^{-1}\right)\left(y-\frac{y^{2}}{2}+\frac{y^{3}}{3}-\frac{y^{4}}{4}+\frac{y^{5}}{5}-\cdots\right)-2 = \left(y-\frac{y^{2}}{2}+\frac{y^{3}}{3}-\frac{y^{4}}{4}+\frac{y^{5}}{5}-\cdots\right)+\left(2-y+\frac{2y^{2}}{3}-\frac{y^{3}}{2}+\frac{2y^{4}}{5}-\cdots\right)-2 = \left(\frac{y^{2}}{6}-\frac{y^{3}}{6}\right)+\left(\frac{3y^{4}}{20}-\frac{2y^{5}}{15}\right)+\cdots = \frac{y^{2}}{6}(1-y)+\frac{y^{4}}{60}(9-8y)+\cdots. $ (4.2)

    From (4.1) and (4.2), we have

    $ Bi(b_{EBS1})-Bi(b_{EBS2}) = Bi(b_{EBS3})-Bi(b_{EBS1}) > 0, $

    that is

    $ Bi(b_{EBS2}) < Bi(b_{EBS1}) < Bi(b_{EBS3}). $

    (ii) From (4.1) and (4.2), we get

    $ \lim\limits_{y\rightarrow 0}\left(Bi(b_{EBS1})-Bi(b_{EBS2})\right) = \lim\limits_{y\rightarrow 0}\left(Bi(b_{EBS3})-Bi(b_{EBS1})\right) = \frac{1}{c}\left(m+\frac{u}{u+v}\right)\lim\limits_{y\rightarrow 0}\left\{\frac{y^{2}}{6}(1-y)+\frac{y^{4}}{60}(9-8y)+\cdots\right\} = 0. $

    That is, $ \lim_{y\rightarrow 0}Bi(b_{EBS1}) = \lim_{y\rightarrow 0}Bi(b_{EBS2}) = \lim_{y\rightarrow 0} Bi(b_{EBS3}) $.

    Theorem 2. Let $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, and let $ b_{EBLi} $ be given by (3.6), (3.7) and (3.8). Then we have the following conclusions:

    (i) $ Bi(b_{EBL2}) < Bi(b_{EBL1}) < Bi(b_{EBL3}), $

    (ii) $ \lim_{P\rightarrow\infty}Bi(b_{EBL1}) = \lim_{P\rightarrow\infty} Bi(b_{EBL2}) = \lim_{P\rightarrow\infty} Bi(b_{EBL3}) $,

    Proof. (i) From (3.6), (3.7) and (3.8), we have

    $ Bi(b_{EBL1})-Bi(b_{EBL2}) = Bi(b_{EBL3})-Bi(b_{EBL1}) = \frac{1}{c^{2}z}\left(m+\frac{u}{u+v}\right)\left[(P+z)(P+z+c)\ln\left(1+\frac{c}{P+z}\right)-P(P+c)\ln\left(1+\frac{c}{P}\right)-cz\right]. $ (4.3)

    Since,

    $ \left[(P+z)(P+z+c)\ln\left(1+\frac{c}{P+z}\right)-P(P+c)\ln\left(1+\frac{c}{P}\right)-cz\right] = z+\left(P+\frac{c}{2}-\frac{c^{2}}{6P}+\frac{c^{3}}{12P^{2}}-\frac{c^{4}}{20P^{3}}+\cdots\right)-\left[P+z+\frac{c}{2}-\frac{c^{2}}{P+z}\left(\frac{1}{2}-\left(1+\frac{c}{P+z}\right)\left[\frac{1}{3}-\frac{c}{4(P+z)}+\frac{c^{2}}{5(P+z)^{2}}-\cdots\right]\right)\right] = \frac{c^{2}}{P+z}\left(\frac{1}{2}-\left(1+\frac{c}{P+z}\right)\left(\frac{1}{3}-\frac{c}{4(P+z)}+\frac{c^{2}}{5(P+z)^{2}}-\cdots\right)\right)-\left(\frac{c^{2}}{6P}-\frac{c^{3}}{12P^{2}}+\frac{c^{4}}{20P^{3}}-\cdots\right). $ (4.4)

    From (4.3) and (4.4), when $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, we have:

    $ Bi(b_{EBL1})-Bi(b_{EBL2}) = Bi(b_{EBL3})-Bi(b_{EBL1}) > 0, $

    that is

    $ Bi(b_{EBL2}) < Bi(b_{EBL1}) < Bi(b_{EBL3}). $

    (ii) From (4.3) and (4.4) we get

    $ \lim\limits_{P\rightarrow\infty}\left(Bi(b_{EBL1})-Bi(b_{EBL2})\right) = \lim\limits_{P\rightarrow\infty}\left(Bi(b_{EBL3})-Bi(b_{EBL1})\right) = \frac{1}{cz}\left(m+\frac{u}{u+v}\right)\left(\lim\limits_{P\rightarrow\infty}\frac{c^{2}}{P+z}\left(\frac{1}{2}-\left(1+\frac{c}{P+z}\right)\left(\frac{1}{3}-\frac{c}{4(P+z)}+\frac{c^{2}}{5(P+z)^{2}}-\cdots\right)\right)\right)-\frac{1}{cz}\left(m+\frac{u}{u+v}\right)\lim\limits_{P\rightarrow\infty}\left(\frac{c^{2}}{6P}-\frac{c^{3}}{12P^{2}}+\frac{c^{4}}{20P^{3}}-\cdots\right) = 0. $ (4.5)

    That is, $ \lim_{P\rightarrow\infty} Bi(b_{EBL1}) = \lim_{P\rightarrow\infty}Bi(b_{EBL2}) = \lim_{P\rightarrow\infty}Bi(b_{EBL3}) $.

    Theorem 3. Let $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, and let $ R_{EBSi} $ be given by (3.9), (3.10) and (3.11). Then we have the following conclusions:

    (i) $ Bi(R_{EBS2}) < Bi(R_{EBS1}) < Bi(R_{EBS3}) $,

    (ii) $ \lim_{P\rightarrow\infty} Bi(R_{EBS1}) = \lim_{P\rightarrow\infty}Bi(R_{EBS2}) = \lim_{P\rightarrow\infty}Bi(R_{EBS3}) $

    Proof. (i) From (3.9), (3.10) and (3.11), we have

    $ g(c) = Bi(R_{EBS1})-Bi(R_{EBS2}) = Bi(R_{EBS3})-Bi(R_{EBS1}) = \int_0^1\int_0^c(2\theta-c)\left(\frac{\theta+P}{\theta+P+P^{*}}\right)^{m+\alpha}\frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^{2}B(u, v)}\, d\theta\, d\alpha. $ (4.6)

    For $ \alpha\in (0, 1) $ and $ \theta\in (0, c) $, the functions $ (2\theta-c)\left(\frac{\theta+P}{\theta+P+P^*}\right)^{m+\alpha} $ and $ \frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^2 B(u, v)} $ are continuous and $ \frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^2 B(u, v)} > 0 $; hence, by the generalized mean value theorem for definite integrals, there exist $ \alpha_0\in (0, 1) $ and $ \theta_0\in (0, c) $ such that

    $ g(c) = (2\theta_0-c)\left(\frac{\theta_0+P}{\theta_0+P+P^{*}}\right)^{m+\alpha_0}\int_0^1\int_0^c\frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^{2}B(u, v)}\, d\theta\, d\alpha = \frac{2\theta_0-c}{c}\left(\frac{\theta_0+P}{\theta_0+P+P^{*}}\right)^{m+\alpha_0} > 0. $ (4.7)

    Therefore, we obtain that

    $ Bi(R_{EBS1})-Bi(R_{EBS2}) = Bi(R_{EBS3})-Bi(R_{EBS1}) > 0, $

    that is

    $ Bi(R_{EBS2}) < Bi(R_{EBS1}) < Bi(R_{EBS3}). $

    (ii) From (4.6) we get

    $ \lim\limits_{P\rightarrow\infty}\left(Bi(R_{EBS1})-Bi(R_{EBS2})\right) = \lim\limits_{P\rightarrow\infty}\left(Bi(R_{EBS3})-Bi(R_{EBS1})\right) = \frac{1}{c^{2}B(u, v)}\lim\limits_{P\rightarrow\infty}\int_0^1\int_0^c(2\theta-c)\left(\frac{\theta+P}{\theta+P+P^{*}}\right)^{m+\alpha}\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c^{2}B(u, v)}\int_0^1\int_0^c(2\theta-c)\left(\lim\limits_{P\rightarrow\infty}\left(\frac{\theta+P}{\theta+P+P^{*}}\right)^{m+\alpha}\right)\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = 0. $ (4.8)

    That is, $ \lim_{P\rightarrow\infty} Bi(R_{EBS1}) = \lim_{P\rightarrow\infty}Bi(R_{EBS2}) = \lim_{P\rightarrow\infty}Bi(R_{EBS3}) $.

    Theorem 4. Let $ c > 0 $, $ 0 < \frac{c}{P} < 1 $, and let $ R_{EBLi} $ be given by (3.12). Then we have the following conclusions:

    (i) $ Bi(R_{EBL3}) < Bi(R_{EBL2}) < Bi(R_{EBL1}) $.

    (ii) $ \lim_{P\rightarrow\infty} Bi(R_{EBL1}) = \lim_{P\rightarrow\infty}Bi(R_{EBL2}) = \lim_{P\rightarrow\infty}Bi(R_{EBL3}) $

    Proof. (i) From (3.2) and (3.12), we have

    $ f(c) = Bi(R_{EBL2})-Bi(R_{EBL3}) = Bi(R_{EBL1})-Bi(R_{EBL2}) = \int_0^1\int_0^c Bi\left(R_{BL}(x)\right)\left(\pi_1(\alpha, \theta)-\pi_2(\alpha, \theta)\right) d\theta\, d\alpha = \int_0^1\int_0^c Bi\left(R_{BL}(x)\right)\left(\pi_3(\alpha, \theta)-\pi_1(\alpha, \theta)\right) d\theta\, d\alpha = \frac{1}{c^{2}B(u, v)}\int_0^1\int_0^c(2\theta-c)\ln\left(\sum\limits_{i = 0}^{\infty}\frac{(-\delta)^{i}}{\Gamma(i)}\left(\frac{\theta+P}{\theta+P+iP^{*}}\right)^{m+\alpha}\right)\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha. $ (4.9)

    For $ \alpha\in (0, 1) $ and $ \theta\in (0, c) $, the functions $ (2\theta-c)\ln\left(\sum_{i = 0}^{\infty}\frac{(-\delta)^i}{\Gamma(i)} \left(\frac{\theta+P}{\theta+P+iP^*}\right)^{m+\alpha}\right) $ and $ \frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^2 B(u, v)} $ are continuous and $ \frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^2 B(u, v)} > 0 $; hence, by the generalized mean value theorem for definite integrals, there exist $ \alpha_1\in (0, 1) $ and $ \theta_1\in (0, c) $ such that

    $ f(c) = (2\theta_1-c)\ln\left(\sum\limits_{i = 0}^{\infty}\frac{(-\delta)^{i}}{\Gamma(i)}\left(\frac{\theta_1+P}{\theta_1+P+iP^{*}}\right)^{m+\alpha_1}\right)\int_0^1\int_0^c\frac{\alpha^{u-1}(1-\alpha)^{v-1}}{c^{2}B(u, v)}\, d\theta\, d\alpha = \frac{2\theta_1-c}{c}\ln\left(\sum\limits_{i = 0}^{\infty}\frac{(-\delta)^{i}}{\Gamma(i)}\left(\frac{\theta_1+P}{\theta_1+P+iP^{*}}\right)^{m+\alpha_1}\right) > 0. $ (4.10)

    Therefore, we obtain that

    $ Bi(R_{EBL2})-Bi(R_{EBL3}) = Bi(R_{EBL1})-Bi(R_{EBL2}) > 0, $

    that is

    $ Bi(R_{EBL3}) < Bi(R_{EBL2}) < Bi(R_{EBL1}). $

    (ii) From (4.9) we get

    $ \lim\limits_{P\rightarrow\infty}\left(Bi(R_{EBL2})-Bi(R_{EBL3})\right) = \lim\limits_{P\rightarrow\infty}\left(Bi(R_{EBL1})-Bi(R_{EBL2})\right) = \int_0^1\int_0^c\lim\limits_{P\rightarrow\infty}Bi\left(R_{BL}(x)\right)\left(\pi_1(\alpha, \theta)-\pi_2(\alpha, \theta)\right) d\theta\, d\alpha = \int_0^1\int_0^c\lim\limits_{P\rightarrow\infty}Bi\left(R_{BL}(x)\right)\left(\pi_3(\alpha, \theta)-\pi_1(\alpha, \theta)\right) d\theta\, d\alpha = \frac{1}{c^{2}B(u, v)}\lim\limits_{P\rightarrow\infty}\int_0^1\int_0^c(2\theta-c)\ln\left[\sum\limits_{i = 0}^{\infty}\frac{(-\delta)^{i}}{\Gamma(i)}\left(\frac{\theta+P}{\theta+P+iP^{*}}\right)^{m+\alpha}\right]\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c^{2}B(u, v)}\int_0^1\int_0^c(2\theta-c)\left(\lim\limits_{P\rightarrow\infty}\ln\left(\sum\limits_{i = 0}^{\infty}\frac{(-\delta)^{i}}{\Gamma(i)}\left(\frac{\theta+P}{\theta+P+iP^{*}}\right)^{m+\alpha}\right)\right)\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = 0. $ (4.11)

    That is, $ \lim_{P\rightarrow\infty} Bi(R_{EBL1}) = \lim_{P\rightarrow\infty}Bi(R_{EBL2}) = \lim_{P\rightarrow\infty}Bi(R_{EBL3}) $.

    In this section, we derive the E-posterior risk of the E-Bayesian estimations of the parameter $ b $ using the three different prior distributions in (3.2) under SE and LINEX loss functions.

    Let $ \ell(\hat{\mu}, \mu) $ be any loss function where $ \hat{\mu} $ is the Bayes estimator of $ \mu $, and $ q(\mu|\underline{x}) $ is the posterior distribution of $ \mu $, then the posterior risk (PR) for the Bayesian estimation is

    $ PR = \int \ell(\hat{\mu}, \mu)\, q(\mu \mid \underline{x})\, d\mu. $ (5.1)

    Under SE loss function the PR of the Bayes estimation is the posterior variance. From the posterior distribution in (2.4) we can obtain the PR of Bayesian estimation of $ b $ as follows

    $ PR_{BS} = A\int_0^{\infty} b^{m+\alpha+1} e^{-(\theta+P)b}\, db-\left(\frac{m+\alpha}{\theta+P}\right)^{2} = \frac{m+\alpha}{(\theta+P)^{2}}. $ (5.2)

    According to Han [23] the E-posterior risk of the E-Bayesian estimation can be obtained as

    $ PR_{EBS} = \iint_{D}PR_{BS}\,\pi(\alpha, \theta)\, d\theta\, d\alpha, $ (5.3)

    where $ PR_{BS} $ is the posterior risk defined in (5.2) and $ \pi(\alpha, \theta) $ is the prior distribution. Now, from (3.2), (5.2) and (5.3) we can obtain the E-posterior risk of the E-Bayesian estimation under the SE loss function as follows

    (i) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBS1} $ is

    $ PR_{EBS1} = \frac{1}{c\,B(u, v)}\int_0^1\int_0^c\frac{m+\alpha}{(\theta+P)^{2}}\,\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c^{2}}\left(m+\frac{u}{u+v}\right)\left(\frac{c}{P}-\frac{c}{P+c}\right). $

    (ii) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBS2} $ is

    $ PR_{EBS2} = \frac{2}{c^{2}B(u, v)}\int_0^1\int_0^c\frac{m+\alpha}{(\theta+P)^{2}}\,(c-\theta)\,\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{2}{c^{2}}\left(m+\frac{u}{u+v}\right)\left(1+\ln(P)+\frac{c}{P}-\frac{c}{P+c}-\frac{P}{P+c}-\ln(P+c)\right). $

    (iii) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBS3} $ is

    $ PR_{EBS3} = \frac{2}{c^{2}B(u, v)}\int_0^1\int_0^c\frac{m+\alpha}{(\theta+P)^{2}}\,\theta\,\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{2}{c^{2}}\left(m+\frac{u}{u+v}\right)\left(\ln(P+c)+\frac{P}{P+c}-\ln(P)-1\right). $
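    The three expressions above are simple functions of $ m $, $ P $, $ c $ and the hyper-parameters $ u, v $; a sketch collecting them follows.

```python
# Sketch of the E-posterior risks under SE loss for the priors pi_1, pi_2, pi_3,
# following the three closed forms above; M denotes m + u/(u+v).
import numpy as np

def e_posterior_risk_SE(m, P, u, v, c):
    M = m + u / (u + v)
    pr1 = M / c ** 2 * (c / P - c / (P + c))
    pr2 = 2 * M / c ** 2 * (1 + np.log(P) + c / P - c / (P + c)
                            - P / (P + c) - np.log(P + c))
    pr3 = 2 * M / c ** 2 * (np.log(P + c) + P / (P + c) - np.log(P) - 1)
    return pr1, pr2, pr3
```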

    Using the same approach as in the previous subsection, we can obtain the posterior risk of the Bayes estimate of $ b $ under the LINEX loss function as follows

    $ PR_{BL} = A\int_0^{\infty}\left(e^{z(\hat{b}_{BL}-b)}-z(\hat{b}_{BL}-b)-1\right)b^{m+\alpha-1} e^{-(\theta+P)b}\, db = e^{z\hat{b}_{BL}}E\left(e^{-zb}\right)-z\hat{b}_{BL}+zE(b)-1 = e^{z\hat{b}_{BL}}e^{-z\hat{b}_{BL}}-z\hat{b}_{BL}+z\hat{b}_{BS}-1 = z\left\{\frac{m+\alpha}{\theta+P}-\frac{m+\alpha}{z}\ln\left(1+\frac{z}{\theta+P}\right)\right\}. $ (5.4)

    From (3.2), (5.3) and (5.4), the E-posterior risk of the E-Bayesian estimation of $ b $ under the LINEX loss function can be obtained as follows

    (i) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBL1} $ is

    $ PR_{EBL1} = \frac{z}{c\,B(u, v)}\int_0^1\int_0^c\left\{\frac{m+\alpha}{\theta+P}-\frac{m+\alpha}{z}\ln\left(1+\frac{z}{\theta+P}\right)\right\}\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \left(m+\frac{u}{u+v}\right)\left[\frac{P+z}{c}\left(\ln\left(1+\frac{c}{P}\right)-\ln\left(1+\frac{c}{P+z}\right)\right)-\ln\left(1+\frac{z}{c+P}\right)\right]. $

    (ii) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBL2} $ is

    $ PR_{EBL2} = \frac{2z}{c^{2}B(u, v)}\int_0^1\int_0^c\left\{\frac{m+\alpha}{\theta+P}-\frac{m+\alpha}{z}\ln\left(1+\frac{z}{\theta+P}\right)\right\}(c-\theta)\,\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \left(m+\frac{u}{u+v}\right)\left[\frac{(c+P)^{2}+2z(c+P)}{c^{2}}\ln\left(1+\frac{c}{P}\right)-\ln\left(1+\frac{z}{P}\right)-\frac{(c+z+P)^{2}}{c^{2}}\ln\left(1+\frac{c}{z+P}\right)-\frac{z}{c}\right]. $

    (iii) The E-posterior risk of the E-Bayesian estimation of $ \hat{b}_{EBL3} $ is

    $ PR_{EBL3} = \frac{2z}{c^{2}B(u, v)}\int_0^1\int_0^c\left\{\frac{m+\alpha}{\theta+P}-\frac{m+\alpha}{z}\ln\left(1+\frac{z}{\theta+P}\right)\right\}\theta\,\alpha^{u-1}(1-\alpha)^{v-1}\, d\theta\, d\alpha = \frac{1}{c^{2}}\left(m+\frac{u}{u+v}\right)\left[(P+z)^{2}\ln\left(1+\frac{c}{z+P}\right)-c^{2}\ln\left(1+\frac{z}{c+P}\right)-\left(P^{2}+2zP\right)\ln\left(1+\frac{c}{P}\right)+zc\right]. $

    In this section we compare the different estimators of the parameter $ b $ and the reliability function by conducting a simulation study. We compare the performance of the different estimators in terms of their biases and mean square errors (MSEs). We consider different values of $ n $, $ m $ and $ T $, and the following three censoring schemes (Sch):

    ● Sch 1: $ R_{1} = \cdots = R_{m-1} = 0 $ and $ R_{m} = n-m $.

    ● Sch 2: $ R_{1} = \cdots = R_{m-1} = 1 $ and $ R_{m} = n-2m+1 $.

    ● Sch 3: $ R_{1} = \cdots = R_{m} = \frac{n-m}{m} $.

    In all settings we take the parameter $ a $ equal to one and consider $ b = 0.5 $ and $ b = 1.5 $. The simulation is conducted according to the following steps:

    (i) Determine $ n, m, R_{i}'s, T $ and the value of the parameter $ b $.

    (ii) Generate the conventional progressive Type-Ⅱ censored sample from the Burr type-XII model according to the method proposed by Balakrishnan and Sandhu [32], by using $ X = [(1-U)^{-1/b}-1]^{1/a} $, where $ U $ is uniform $ (0, 1) $ (a code sketch of this step is given after this list).

    (iii) Determine the value of $ J $, and withdraw all the observations greater than the $ J-th $ observation.

    (iv) Generate the remaining $ (m-J-1) $ observations as a Type-Ⅱ censored sample from the truncated density $ f(x)/[1-F(x_{J+1})] $ and stop the experiment at $ x_{m} $. In this case, $ X = \left\{\left[\frac{1-U}{(1+x_{J+1}^a)^b}\right]^{-1/b}-1\right\}^{1/a} $.

    (v) Obtain the different estimates of the parameter $ b $ and the reliability function at time $ x = 0.75 $.

    (vi) Repeat steps 2–5, 1000 times.

    (vii) Obtain the average values of biases and MSEs (for the reliability function we obtain the MSE only).
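    A minimal sketch of step (ii), the core of the data generation, is given below; it produces a conventional progressive Type-Ⅱ censored sample from Burr$ (a, b) $ via the uniform-sample algorithm of Balakrishnan and Sandhu [32] and the inverse CDF used above, after which steps (iii)–(iv) truncate and regenerate the tail as described. The function name and the use of numpy are ours.

```python
# Sketch of step (ii): conventional progressive Type-II censored sample of size m
# from Burr(a, b), with removal scheme R (length m, n = m + sum(R)).
import numpy as np

def progressive_type2_burr(n, m, R, a, b, seed=None):
    rng = np.random.default_rng(seed)
    R = np.asarray(R)
    W = rng.uniform(size=m)
    # V_i = W_i^(1 / (i + R_m + R_{m-1} + ... + R_{m-i+1})), i = 1, ..., m
    tail_sums = np.cumsum(R[::-1])
    V = W ** (1.0 / (np.arange(1, m + 1) + tail_sums))
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}: progressively censored U(0,1) order statistics
    U = 1.0 - np.cumprod(V[::-1])
    # Invert the Burr(a, b) CDF: X = [(1 - U)^(-1/b) - 1]^(1/a)
    return ((1.0 - U) ** (-1.0 / b) - 1.0) ** (1.0 / a)
```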

    To obtain the Bayesian and E-Bayesian estimates of the parameter $ b $, we choose the hyperparameter values $ \alpha = 0.5 $ and $ \theta = 1 $ for $ b = 0.5 $, and $ \alpha = 0.75 $ and $ \theta = 0.5 $ for $ b = 1.5 $. These values are selected so that the prior means coincide with the true parameter values. The Bayesian and E-Bayesian estimates under the LINEX loss function are obtained by setting $ z = -3 $ in all cases. The average biases and MSEs for the parameter $ b $ are shown in Table 1 for $ b = 0.5 $ and in Table 3 for $ b = 1.5 $. The average MSEs of the different estimates of the reliability function are displayed in Table 2 for $ b = 0.5 $ and in Table 4 for $ b = 1.5 $.

    Table 1.  Average values of bias (first row) and MSE (second row) of the different estimates for b = 0.5 under different censoring schemes.
    $ (n, m) $ Sch $ \hat{b}_{ML} $ $ \hat{b}_{BS} $ $ \hat{b}_{EBS1} $ $ \hat{b}_{EBS2} $ $ \hat{b}_{EBS3} $ $ \hat{b}_{BL} $ $ \hat{b}_{EBL1} $ $ \hat{b}_{EBL2} $ $ \hat{b}_{EBL3} $
    $ T=0.3 $
    (30, 5) 1 0.2367 0.2254 0.2142 0.2130 0.2155 0.2014 0.1880 0.1865 0.1896
    0.0590 0.0538 0.0494 0.0489 0.0499 0.0447 0.0402 0.0397 0.0407
    2 0.2286 0.2173 0.2055 0.2042 0.2069 0.1920 0.1778 0.1762 0.1795
    0.0550 0.0499 0.0454 0.0449 0.0459 0.0406 0.0361 0.0356 0.0366
    3 0.1930 0.1821 0.1674 0.1657 0.1691 0.1503 0.1324 0.1303 0.1345
    0.0387 0.0346 0.0297 0.0292 0.0303 0.0246 0.0200 0.0195 0.0205
    (30, 10) 1 0.1375 0.1332 0.1229 0.1218 0.1241 0.1116 0.0999 0.0986 0.1012
    0.0247 0.0233 0.0213 0.0211 0.0215 0.0194 0.0179 0.0177 0.0180
    2 0.1090 0.1054 0.0943 0.0927 0.0959 0.0803 0.0678 0.0659 0.0696
    0.0172 0.0161 0.0145 0.0143 0.0148 0.0130 0.0119 0.0118 0.0121
    3 0.0777 0.0750 0.0622 0.0603 0.0641 0.0460 0.0314 0.0292 0.0335
    0.0108 0.0100 0.0089 0.0087 0.0090 0.0079 0.0076 0.0076 0.0076
    (60, 5) 1 0.3224 0.3115 0.3068 0.3061 0.3075 0.3009 0.2956 0.2948 0.2964
    0.1046 0.0977 0.0948 0.0944 0.0952 0.0913 0.0882 0.0878 0.0887
    2 0.3187 0.3077 0.3027 0.3020 0.3035 0.2966 0.2911 0.2903 0.2919
    0.1021 0.0952 0.0923 0.0919 0.0927 0.0887 0.0855 0.0851 0.0860
    3 0.2801 0.2684 0.2613 0.2603 0.2624 0.2523 0.2441 0.2429 0.2453
    0.0788 0.0724 0.0687 0.0682 0.0692 0.0641 0.0602 0.0596 0.0607
    (60, 20) 1 0.1346 0.1323 0.1272 0.1266 0.1277 0.1218 0.1164 0.1157 0.1170
    0.0210 0.0203 0.0192 0.0190 0.0193 0.0180 0.0169 0.0168 0.0170
    2 0.1004 0.0985 0.0924 0.0917 0.0931 0.0860 0.0795 0.0788 0.0803
    0.0129 0.0124 0.0114 0.0113 0.0116 0.0105 0.0096 0.0095 0.0097
    3 0.0628 0.0616 0.0544 0.0536 0.0552 0.0467 0.0390 0.0381 0.0398
    0.0065 0.0062 0.0056 0.0055 0.0056 0.0050 0.0045 0.0045 0.0046
    $ T=0.6 $
    (30, 5) 1 0.3043 0.2930 0.2868 0.2861 0.2875 0.2800 0.2730 0.2722 0.2738
    0.0935 0.0868 0.0834 0.0830 0.0838 0.0797 0.0760 0.0755 0.0764
    2 0.2927 0.2812 0.2743 0.2735 0.2751 0.2667 0.2588 0.2579 0.2597
    0.0866 0.0800 0.0763 0.0759 0.0767 0.0723 0.0684 0.0679 0.0688
    3 0.2416 0.2298 0.2194 0.2182 0.2206 0.2075 0.1952 0.1938 0.1967
    0.0588 0.0532 0.0486 0.0481 0.0491 0.0436 0.0388 0.0382 0.0393
    (30, 10) 1 0.1993 0.1937 0.1867 0.1859 0.1875 0.1791 0.1713 0.1705 0.1722
    0.0419 0.0396 0.0372 0.0369 0.0374 0.0346 0.0322 0.0319 0.0324
    2 0.1592 0.1542 0.1453 0.1443 0.1463 0.1356 0.1256 0.1245 0.1268
    0.0272 0.0255 0.0230 0.0228 0.0233 0.0205 0.0182 0.0179 0.0184
    3 0.1140 0.1099 0.0986 0.0973 0.0999 0.0861 0.0734 0.0719 0.0748
    0.0143 0.0133 0.0111 0.0109 0.0113 0.0090 0.0072 0.0070 0.0074
    (60, 5) 1 0.3693 0.3600 0.3572 0.3569 0.3575 0.3541 0.3510 0.3507 0.3514
    0.1371 0.1304 0.1285 0.1282 0.1287 0.1264 0.1243 0.1241 0.1245
    2 0.3675 0.3581 0.3552 0.3549 0.3556 0.3521 0.3490 0.3486 0.3493
    0.1357 0.1289 0.1269 0.1267 0.1271 0.1248 0.1227 0.1224 0.1229
    3 0.3203 0.3092 0.3041 0.3035 0.3047 0.2985 0.2927 0.2921 0.2934
    0.1028 0.0958 0.0927 0.0923 0.0930 0.0893 0.0860 0.0856 0.0864
    (60, 20) 1 0.1935 0.1907 0.1871 0.1867 0.1875 0.1833 0.1796 0.1792 0.1800
    0.0387 0.0376 0.0363 0.0362 0.0365 0.0350 0.0337 0.0336 0.0339
    2 0.1484 0.1459 0.1412 0.1406 0.1417 0.1363 0.1313 0.1307 0.1319
    0.0231 0.0224 0.0211 0.0209 0.0212 0.0198 0.0185 0.0184 0.0187
    3 0.0962 0.0943 0.0882 0.0875 0.0889 0.0817 0.0752 0.0745 0.0759
    0.0101 0.0097 0.0086 0.0085 0.0088 0.0076 0.0066 0.0065 0.0067

    Table 2.  Average values of MSE of the different estimates of the reliability function for $ b = 0.5 $ under different censoring schemes.
    $ (n, m) $ Sch $ \hat{R}_{ML} $ $ \hat{R}_{BS} $ $ \hat{R}_{EBS1} $ $ \hat{R}_{EBS2} $ $ \hat{R}_{EBS3} $ $ \hat{R}_{BL} $ $ \hat{R}_{EBL1} $ $ \hat{R}_{EBL2} $ $ \hat{R}_{EBL3} $
    $ T=0.3 $
    (30, 5) 1 0.0123 0.0115 0.0105 0.0104 0.0106 0.0123 0.0114 0.0113 0.0101
    2 0.0114 0.0106 0.0097 0.0095 0.0098 0.0115 0.0105 0.0104 0.0091
    3 0.0078 0.0073 0.0063 0.0062 0.0064 0.0082 0.0072 0.0071 0.0060
    (30, 10) 1 0.0049 0.0048 0.0044 0.0043 0.0044 0.0052 0.0048 0.0047 0.0043
    2 0.0034 0.0033 0.0030 0.0029 0.0030 0.0037 0.0033 0.0032 0.0028
    3 0.0021 0.0020 0.0018 0.0017 0.0018 0.0023 0.0020 0.0020 0.0015
    (60, 5) 1 0.0225 0.0212 0.0205 0.0204 0.0206 0.0218 0.0212 0.0211 0.0200
    2 0.0220 0.0206 0.0199 0.0198 0.0200 0.0213 0.0206 0.0205 0.0195
    3 0.0166 0.0154 0.0146 0.0145 0.0147 0.0163 0.0155 0.0153 0.0143
    (60, 20) 1 0.0041 0.0041 0.0038 0.0038 0.0039 0.0043 0.0041 0.0040 0.0034
    2 0.0025 0.0025 0.0023 0.0023 0.0023 0.0027 0.0025 0.0024 0.0022
    3 0.0012 0.0012 0.0011 0.0011 0.0011 0.0014 0.0012 0.0012 0.0010
    $ T=0.6 $
    (30, 5) 1 0.0200 0.0187 0.0179 0.0179 0.0180 0.0194 0.0187 0.0186 0.0160
    2 0.0184 0.0172 0.0163 0.0163 0.0164 0.0179 0.0171 0.0170 0.0155
    3 0.0121 0.0112 0.0102 0.0101 0.0104 0.0121 0.0112 0.0111 0.0100
    (30, 10) 1 0.0085 0.0082 0.0077 0.0076 0.0077 0.0087 0.0082 0.0081 0.0074
    2 0.0054 0.0052 0.0047 0.0047 0.0048 0.0057 0.0052 0.0051 0.0044
    3 0.0028 0.0027 0.0023 0.0022 0.0023 0.0031 0.0027 0.0026 0.0020
    (60, 5) 1 0.0303 0.0289 0.0284 0.0284 0.0285 0.0293 0.0289 0.0288 0.0282
    2 0.0300 0.0285 0.0280 0.0280 0.0281 0.0290 0.0285 0.0285 0.0278
    3 0.0221 0.0207 0.0200 0.0199 0.0201 0.0214 0.0207 0.0206 0.0198
    (60, 20) 1 0.0078 0.0077 0.0074 0.0074 0.0074 0.0079 0.0076 0.0076 0.0070
    2 0.0045 0.0045 0.0042 0.0042 0.0042 0.0047 0.0044 0.0044 0.0040
    3 0.0019 0.0019 0.0017 0.0017 0.0017 0.0021 0.0019 0.0019 0.0015

    Table 3.  Average values of bias (first row) and MSE (second row) of the different estimates for b = 1.5 under different censoring schemes.
    $ (n, m) $ Sch $ \hat{b}_{ML} $ $ \hat{b}_{BS} $ $ \hat{b}_{EBS1} $ $ \hat{b}_{EBS2} $ $ \hat{b}_{EBS3} $ $ \hat{b}_{BL} $ $ \hat{b}_{EBL1} $ $ \hat{b}_{EBL2} $ $ \hat{b}_{EBL3} $
    $ T=0.2 $
    (30, 5) 1 0.9555 0.9071 0.8972 0.8896 0.9048 0.7845 0.7694 0.7578 0.7809
    0.9227 0.8329 0.8159 0.8029 0.8289 0.6405 0.6199 0.6045 0.6356
    2 0.9253 0.8758 0.8649 0.8565 0.8732 0.7401 0.7233 0.7104 0.7361
    0.8635 0.7745 0.7561 0.7422 0.7702 0.5660 0.5435 0.5267 0.5607
    3 0.7540 0.7020 0.6843 0.6707 0.6979 0.4654 0.4339 0.4100 0.4578
    0.5721 0.4963 0.4720 0.4539 0.4905 0.2267 0.1997 0.1806 0.2200
    (30, 10) 1 0.6448 0.6192 0.6077 0.5989 0.6166 0.4849 0.4693 0.4573 0.4813
    0.4334 0.4006 0.3874 0.3775 0.3975 0.2659 0.2530 0.2434 0.2628
    2 0.5182 0.4946 0.4797 0.4682 0.4911 0.3163 0.2951 0.2789 0.3113
    0.2827 0.2581 0.2445 0.2342 0.2549 0.1266 0.1156 0.1079 0.1239
    3 0.3710 0.3517 0.3323 0.3174 0.3472 0.1128 0.0837 0.0614 0.1060
    0.1474 0.1327 0.1201 0.1109 0.1297 0.0326 0.0287 0.0269 0.0315
    (60, 5) 1 1.0874 1.0453 1.0394 1.0349 1.0440 0.9757 0.9676 0.9613 0.9738
    1.1927 1.1041 1.0924 1.0835 1.1014 0.9728 0.9583 0.9473 0.9694
    2 1.0924 1.0506 1.0449 1.0404 1.0493 0.9831 0.9752 0.9691 0.9812
    1.2028 1.1141 1.1026 1.0939 1.1115 0.9855 0.9714 0.9606 0.9822
    3 0.9780 0.9297 0.9208 0.9139 0.9276 0.8220 0.8091 0.7992 0.8190
    0.9583 0.8662 0.8499 0.8374 0.8624 0.6796 0.6589 0.6433 0.6748
    (60, 20) 1 0.6286 0.6154 0.6096 0.6051 0.6141 0.5526 0.5459 0.5407 0.5511
    0.4045 0.3880 0.3811 0.3758 0.3864 0.3176 0.3106 0.3052 0.3160
    2 0.4871 0.4752 0.4674 0.4614 0.4735 0.3901 0.3809 0.3738 0.3880
    0.2449 0.2333 0.2262 0.2208 0.2317 0.1625 0.1558 0.1507 0.1610
    3 0.3199 0.3109 0.3003 0.2922 0.3085 0.1946 0.1819 0.1720 0.1917
    0.1077 0.1017 0.0955 0.0908 0.1003 0.0453 0.0408 0.0376 0.0443
    $ T=0.5 $
    (30, 5) 1 0.9296 0.8812 0.8702 0.8617 0.8786 0.7414 0.7237 0.7102 0.7372
    0.8824 0.7957 0.7778 0.7643 0.7915 0.5944 0.5731 0.5574 0.5893
    2 0.9392 0.8913 0.8806 0.8723 0.8888 0.7549 0.7377 0.7245 0.7509
    0.9028 0.8163 0.7988 0.7855 0.8122 0.6198 0.5991 0.5839 0.6149
    3 0.9376 0.8882 0.8778 0.8698 0.8858 0.7599 0.7441 0.7321 0.7562
    0.8842 0.7942 0.7763 0.7626 0.7900 0.5900 0.5677 0.5509 0.5847
    (30, 10) 1 0.8807 0.8550 0.8488 0.8440 0.8536 0.7853 0.7775 0.7715 0.7835
    0.7915 0.7467 0.7369 0.7294 0.7444 0.6436 0.6331 0.6250 0.6412
    2 0.7535 0.7265 0.7179 0.7112 0.7245 0.6282 0.6171 0.6085 0.6256
    0.5701 0.5301 0.5178 0.5083 0.5273 0.3983 0.3847 0.3744 0.3952
    3 0.5564 0.5315 0.5179 0.5074 0.5284 0.3701 0.3511 0.3366 0.3657
    0.3133 0.2860 0.2719 0.2613 0.2827 0.1435 0.1303 0.1207 0.1404
    (60, 5) 1 1.0495 1.0048 0.9980 0.9927 1.0032 0.9240 0.9145 0.9072 0.9218
    1.1072 1.0161 1.0028 0.9926 1.0130 0.8660 0.8494 0.8368 0.8621
    2 1.0495 1.0048 0.9980 0.9927 1.0032 0.9240 0.9145 0.9072 0.9218
    1.1072 1.0161 1.0028 0.9926 1.0130 0.8660 0.8494 0.8368 0.8621
    3 1.0683 1.0247 1.0184 1.0135 1.0232 0.9505 0.9418 0.9352 0.9485
    1.1473 1.0566 1.0441 1.0346 1.0538 0.9161 0.9006 0.8888 0.9125
    (60, 20) 1 0.8853 0.8720 0.8691 0.8669 0.8714 0.8414 0.8381 0.8357 0.8406
    0.7872 0.7638 0.7588 0.7549 0.7627 0.7121 0.7068 0.7027 0.7109
    2 0.7200 0.7062 0.7016 0.6980 0.7052 0.6568 0.6515 0.6475 0.6556
    0.5195 0.4999 0.4934 0.4884 0.4984 0.4327 0.4259 0.4207 0.4312
    3 0.5006 0.4884 0.4808 0.4750 0.4867 0.4062 0.3973 0.3904 0.4042
    0.2522 0.2401 0.2328 0.2273 0.2384 0.1671 0.1601 0.1547 0.1655

    Table 4.  Average values of MSE of the different estimates of the reliability function for $ b = 1.5 $ under different censoring schemes.
    $ (n, m) $ Sch $ \hat{R}_{ML} $ $ \hat{R}_{BS} $ $ \hat{R}_{EBS1} $ $ \hat{R}_{EBS2} $ $ \hat{R}_{EBS3} $ $ \hat{R}_{BL} $ $ \hat{R}_{EBL1} $ $ \hat{R}_{EBL2} $ $ \hat{R}_{EBL3} $
    $ T=0.2 $
    (30, 5) 1 0.0955 0.0875 0.0855 0.0839 0.0871 0.0951 0.0931 0.0916 0.0825
    2 0.0875 0.0802 0.0780 0.0764 0.0797 0.0880 0.0859 0.0843 0.0773
    3 0.0521 0.0483 0.0458 0.0439 0.0477 0.0567 0.0542 0.0523 0.0437
    (30, 10) 1 0.0380 0.0369 0.0356 0.0347 0.0366 0.0413 0.0400 0.0390 0.0336
    2 0.0230 0.0229 0.0217 0.0207 0.0226 0.0269 0.0256 0.0246 0.0206
    3 0.0110 0.0116 0.0106 0.0098 0.0114 0.0149 0.0137 0.0129 0.0104
    (60, 5) 1 0.1338 0.1238 0.1222 0.1210 0.1234 0.1300 0.1285 0.1273 0.1204
    2 0.1352 0.1250 0.1235 0.1223 0.1247 0.1312 0.1297 0.1286 0.1200
    3 0.0995 0.0911 0.0891 0.0875 0.0906 0.0987 0.0967 0.0952 0.0866
    (60, 20) 1 0.0346 0.0342 0.0335 0.0330 0.0340 0.0365 0.0358 0.0352 0.0320
    2 0.0193 0.0194 0.0188 0.0183 0.0192 0.0214 0.0207 0.0202 0.0182
    3 0.0077 0.0081 0.0076 0.0073 0.0080 0.0096 0.0091 0.0087 0.0071
    $ T=0.5 $
    (30, 5) 1 0.0911 0.0838 0.0817 0.0801 0.0833 0.0914 0.0893 0.0878 0.0813
    2 0.0940 0.0865 0.0845 0.0829 0.0861 0.0940 0.0920 0.0905 0.0831
    3 0.0900 0.0824 0.0803 0.0786 0.0819 0.0902 0.0882 0.0866 0.0779
    (30, 10) 1 0.0786 0.0752 0.0740 0.0731 0.0749 0.0795 0.0784 0.0775 0.0729
    2 0.0518 0.0498 0.0485 0.0475 0.0495 0.0544 0.0531 0.0521 0.0475
    3 0.0254 0.0252 0.0239 0.0229 0.0249 0.0294 0.0281 0.0271 0.0219
    (60, 5) 1 0.1207 0.1111 0.1094 0.1080 0.1107 0.1179 0.1162 0.1149 0.1007
    2 0.1207 0.1111 0.1094 0.1080 0.1107 0.1179 0.1162 0.1149 0.1005
    3 0.1265 0.1166 0.1150 0.1137 0.1163 0.1232 0.1216 0.1204 0.1000
    (60, 20) 1 0.0774 0.0756 0.0750 0.0745 0.0755 0.0779 0.0773 0.0769 0.0734
    2 0.0461 0.0453 0.0446 0.0441 0.0451 0.0477 0.0470 0.0465 0.0430
    3 0.0197 0.0198 0.0192 0.0187 0.0197 0.0219 0.0212 0.0207 0.0183


    From Tables 1–4 we have the following observations:

    (i) The average biases decrease as $ m $ increases in all the cases, which indicates that the different estimators used to estimate the parameter $ b $ are asymptotically unbiased.

    (ii) The average MSEs decrease and tend to zero as $ m $ increases in all the cases. Thus the different estimators used to estimate the parameter $ b $ and the reliability function are consistent.

    (iii) The Bayesian and E-Bayesian estimates of the parameter $ b $ perform better than MLE in all the cases in terms of minimum average biases and MSEs.

    (iv) Under SE loss function, the E-Bayesian estimates of the parameter $ b $ have less average biases and MSEs than the Bayes estimate.

    (v) Under LINEX loss function, the E-Bayesian estimates of the parameter $ b $ have less average biases and MSEs than the Bayes estimate.

    (vi) In terms of minimum average biases and MSEs, the performance order of the E-Bayesian estimates under the SE and LINEX loss functions is: the estimates using prior distribution 2, then those using prior distribution 1, and then those using prior distribution 3.

    (vii) The E-Bayesian estimate of the parameter $ b $ under LINEX loss function using prior distribution 2 has the smallest average biases and MSEs among all other different estimates in all the cases.

    (viii) The E-Bayesian estimate of the reliability function under the LINEX loss function using prior distribution 3 performs better than the other estimates in terms of minimum average biases and MSEs.

    (ix) Comparing the three censoring schemes, we observed that Sch 3 has the smallest average biases and MSEs in all the cases followed by Sch 2 and then 1.

    (x) The E-Bayesian estimate of the parameter $ b $ under LINEX loss function using prior distribution 2 has the minimum average biases and MSEs among all the other estimates.

    It is observed that the E-Bayesian estimates under the two loss functions using prior distribution 2 perform better than the other estimates. It is known that the density of prior distribution 2 is a decreasing function of the hyper-parameter $ \theta $, while the density of prior distribution 3 is an increasing function. From this comparison, we can conclude that when the prior density of the hyper-parameter $ \theta $ is decreasing, the E-Bayesian estimates perform better than the estimates based on the other priors. Moreover, we obtain the E-posterior risk of the parameter $ b $ under the SE and LINEX loss functions. Here, we only display the case of $ b = 0.5 $, $ n = 60 $, $ m = (10, 20) $ and $ T = 0.3 $ using the three censoring schemes in Figure 1. From Figure 1, it is observed that the E-posterior risk decreases as the number of failures $ m $ increases in all cases. Also, under the SE loss function the E-posterior risk using prior distribution 3 is smaller than under the other prior distributions. Similarly, under the LINEX loss function the E-posterior risk using prior distribution 3 is the smallest among all prior distributions. Finally, the ordering of the E-posterior risks under the SE and LINEX loss functions is: prior distribution 3, then prior distribution 1, then prior distribution 2.

    Figure 1.  Posterior and E-posterior risk under SE and LINEX loss functions for $ b = 0.5 $ and $ n = 60 $.

    In this section we analyze a real data set given by Lawless [33]. These data represent the times to breakdown of an insulating fluid between electrodes at a voltage of 34 kV. Zimmer et al. [34] showed that the Burr type-XII distribution is suitable for fitting these data. The original data set consists of 19 observations. We used the maximum likelihood method to obtain the estimates of the parameters $ a $ and $ b $ from the complete data set; the MLEs of $ a $ and $ b $ are 1.7379 and 0.2936, respectively. Mahmoud et al. [8] used these data to generate two adaptive progressively censored samples by considering $ m = 10 $, $ T = 6, 9 $ and $ \boldsymbol{R} = \{3, 0, 0, 0, 3, 0, 0, 0, 0\} $. The generated samples are

    Sample 1 ($ T = 6 $): 0.19, 0.78, 0.96, 1.31, 2.87, 4.15, 4.85, 6.53, 6.71, 72.89
    Sample 2 ($ T = 9 $): 0.19, 0.78, 0.96, 1.31, 2.87, 3.16, 4.85, 8.27, 12.06, 72.89


    These data were also analyzed by Nassar et al. [12]. Here, we assume that the parameter $ a $ is known and equal to $ 1.7379 $ and use the two adaptive progressively hybrid censored samples to estimate the unknown parameter $ b $. To compute the Bayesian and E-Bayesian estimates, we consider the case of noninformative priors by choosing the hyperparameters so that $ b\sim \text{gamma}(0.01, 0.01) $.

    The MLE, Bayesian and E-Bayesian estimates of the parameter $ b $ are obtained and displayed in Table 5. The posterior and E-posterior risks are also presented in Table 5, and the variance of the MLE of $ b $ is obtained from the observed information matrix. From Table 5, it is observed that $ \hat{b}_{EBS2} $ under the SE loss function and $ \hat{b}_{EBL2} $ under the LINEX loss function are closer to the true value of $ b $ than the other estimates. Also, the E-posterior risk using prior distribution 3 under the SE and LINEX loss functions is the smallest among all the priors. These results coincide with those discussed in the simulation section. Table 6 shows the different estimates of the reliability function for different values of $ x $. Comparing the different estimates of the reliability function given in Table 6, we can conclude that the estimates based on prior distribution 2 under the SE and LINEX loss functions are closer to the true value of the reliability function, that is, the value based on the parameter estimates obtained from the complete sample.

    Table 5.  MLE, Bayesian and E-Bayesian estimates of $ b $ (first row) and variance, posterior risk and E-posterior risk (second row) for the real data.
    Sample $ \hat{b}_{ML} $ $ \hat{b}_{BS} $ $ \hat{b}_{EBS1} $ $ \hat{b}_{EBS2} $ $ \hat{b}_{EBS3} $ $ \hat{b}_{BL} $ $ \hat{b}_{EBL1} $ $ \hat{b}_{EBL2} $ $ \hat{b}_{EBL3} $
    Sample 1 ($ T=6 $) 0.18258 0.18273 0.18667 0.19066 0.18268 0.18357 0.18748 0.19150 0.18345
    0.00333 0.00334 0.00320 0.00334 0.00306 0.00042 0.00040 0.00042 0.00038
    Sample 2 ($ T=9 $) 0.18931 0.18946 0.19070 0.19569 0.18570 0.19036 0.19154 0.19658 0.18649
    0.00333 0.00359 0.00334 0.00352 0.00317 0.00045 0.00042 0.00044 0.00040

    Table 6.  MLE, Bayesian and E-Bayesian estimates of reliability function for the real data.
    Sample $ x $ $ \hat{R}_{ML} $ $ \hat{R}_{BS} $ $ \hat{R}_{EBS1} $ $ \hat{R}_{EBS2} $ $ \hat{R}_{EBS3} $ $ \hat{R}_{BL} $ $ \hat{R}_{EBL1} $ $ \hat{R}_{EBL2} $ $ \hat{R}_{EBL3} $
    Sample 1 ($ T=6 $) 1 0.8811 0.8817 0.8793 0.8769 0.8817 0.8820 0.8796 0.8772 0.8820
    10 0.4800 0.4922 0.4842 0.4769 0.4914 0.4951 0.4869 0.4797 0.4941
    50 0.2889 0.3100 0.3013 0.2940 0.3086 0.3131 0.3042 0.2968 0.3115
    Sample 2 ($ T=9 $) 1 0.8770 0.8777 0.8769 0.8739 0.8799 0.8780 0.8770 0.8739 0.8802
    10 0.4672 0.4800 0.4770 0.4681 0.4859 0.4830 0.4797 0.4708 0.4886
    50 0.2760 0.2976 0.2941 0.2852 0.3031 0.3008 0.2970 0.2881 0.3059


    In this paper we have investigated the E-Bayesian estimation of the parameter and the reliability function of the Burr type-XII distribution based on the A-II PHCS. The E-Bayesian estimation is considered using three different prior distributions under two loss functions, namely the SE and LINEX loss functions. The properties of the E-Bayesian estimates as well as the E-posterior risk are also derived. We compared the performance of the E-Bayesian estimators with the maximum likelihood and Bayesian estimators via an extensive simulation study. The simulation results revealed that the E-Bayesian estimators perform better than the maximum likelihood and Bayesian estimators in terms of minimum biases and MSEs. Moreover, we analyzed one real data set for illustration purposes, and the results coincide with those of the simulation section. As future work, the E-Bayesian estimation for the Burr type-XII distribution under the A-II PHCS when the two parameters are unknown is still an open problem. Another direction is to obtain the E-Bayesian estimates for the parameters of the Burr type-XII distribution under the A-II PHCS using different prior distributions for the hyper-parameters.

    The authors would like to express their thanks to the editor and the anonymous referees for useful suggestions and comments. This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant no. (G: 537-130-1441). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

    The authors declare there is no conflict of interest in this paper.



