
Analysis of information measures using generalized type-Ⅰ hybrid censored data

  • Extropy is the complementary dual of the entropy measure of uncertainty. Over the last six years, this measure of randomness has received considerable attention; however, it cannot be applied to systems that have already survived for some time, which motivated the notion of residual extropy. To estimate the extropy and residual extropy, Bayesian and non-Bayesian estimators of the unknown parameters of the exponentiated gamma distribution are derived. The Bayesian estimators are obtained under balanced loss functions, namely the balanced squared error, balanced linear exponential and balanced general entropy loss functions. We use the Lindley method to obtain the extropy and residual extropy estimates for the exponentiated gamma distribution based on generalized type-Ⅰ hybrid censored data. To assess the effectiveness of the proposed methodologies, a simulation experiment was carried out, and a real data set was analyzed for illustrative purposes. According to the results, the mean squared error values decrease as the number of failures increases. The Bayesian estimates of residual extropy under the balanced linear exponential loss function perform well compared with the other estimates, whereas the Bayesian estimates of extropy perform well under the balanced general entropy loss function in the majority of situations.

    Citation: Baria A. Helmy, Amal S. Hassan, Ahmed K. El-Kholy, Rashad A. R. Bantan, Mohammed Elgarhy. Analysis of information measures using generalized type-Ⅰ hybrid censored data[J]. AIMS Mathematics, 2023, 8(9): 20283-20304. doi: 10.3934/math.20231034




    In life-testing experiments, it is preferable to stop the trial before all of the elements fail owing to funding and time constraints. The observations that result from such a condition are known as censored samples, and there is a variety of censoring procedures. If the test is terminated at a predefined censoring time, it is called type Ⅰ (T-Ⅰ) censoring; the test is terminated after a specified number of failures in type Ⅱ (T-Ⅱ) censoring. The hybrid censoring scheme (HCS) combines the T-Ⅰ and T-Ⅱ censoring techniques with the following characteristics. In a life-testing situation, suppose there are $n$ identical items with independent and identical lifetime distributions, and let $X_{1:n},X_{2:n},\dots,X_{n:n}$ denote their ordered failure times. The test is completed when a predetermined number $r$ of elements, $1\le r\le n$, fail, or when a predetermined time $T\in(0,\infty)$ is reached. HCS types Ⅰ and Ⅱ are the two types of hybrid censoring proposed in [1].

    The T-Ⅰ HCS terminates the life-testing experiment at the random time $T_{1}=\min(x_{r:n},T)$. The T-Ⅰ HCS has the drawback that extremely few failures may occur up to the time $T_{1}$. To overcome this problem, Childs et al. [2] proposed the T-Ⅱ HCS, which guarantees a fixed number of failures and has termination time $T_{2}=\max(x_{r:n},T)$. Nevertheless, although the T-Ⅱ HCS guarantees a certain number of failures, the life test may take a long time to complete, which is a drawback. Chandrasekar et al. [3] extended these techniques by investigating two generalizations, known as the generalized type-Ⅰ HCS (GT-Ⅰ HCS) and the generalized type-Ⅱ HCS (GT-Ⅱ HCS). Our interest here is in the GT-Ⅰ HCS, which is described below.

    In the GT-Ⅰ HCS, one specifies $k,r\in\{1,2,\dots,n\}$ with $k<r$ and a time $T\in(0,\infty)$. If the $k$th failure is observed after the time $T$, the experiment terminates at $T^{*}=x_{k:n}$. If the $k$th failure is observed before the time $T$, the experiment terminates at $T^{*}=\min(x_{r:n},T)$. Consequently, the GT-Ⅰ HCS improves the T-Ⅰ HCS by enabling the experiment to proceed beyond $T$ if very few failures have occurred up to that point (see Figure 1):

    Figure 1.  Schematic representation of the GT-Ⅰ HCS.

    From Figure 1, we summarize the GT-Ⅰ HCS as follows:

    Ⅰ: If $x_{1:n}<x_{2:n}<\dots<T<\dots<x_{k:n}$, the experiment terminates at $T^{*}=x_{k:n}$.

    Ⅱ: If $x_{1:n}<\dots<x_{k:n}<\dots<x_{r:n}<\dots<T$, the experiment terminates at $T^{*}=x_{r:n}$.

    Ⅲ: If $x_{1:n}<\dots<x_{k:n}<\dots<T<\dots<x_{r:n}$, the experiment terminates at $T^{*}=T$.
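
    As an illustration of this stopping rule, the short Python sketch below classifies an ordered sample into one of the three cases and returns the termination time together with the number of observed failures. It is only a schematic rendering of the rule above; the function and variable names are ours, not the paper's.

```python
import numpy as np

def gt1_hcs_stop(x_sorted, k, r, T):
    """Apply the GT-I HCS stopping rule to an ordered sample (1-based k < r <= n)."""
    x_k, x_r = x_sorted[k - 1], x_sorted[r - 1]
    if T < x_k:                 # case I: fewer than k failures by time T, wait for the k-th
        case, stop = "I", x_k
    elif x_r < T:               # case II: the r-th failure occurs before T
        case, stop = "II", x_r
    else:                       # case III: x_{k:n} <= T <= x_{r:n}, stop at T
        case, stop = "III", T
    # number of failures observed up to the termination time
    D = int(np.searchsorted(x_sorted, stop, side="right"))
    return case, stop, D

# toy example with an arbitrary ordered sample (not the paper's data)
x = np.sort(np.random.default_rng(1).exponential(1.0, size=20))
print(gt1_hcs_stop(x, k=5, r=12, T=0.8))
```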

    Assume that X is a non-negative random variable with the probability density function (pdf) f(x). Shannon [4] defined entropy as follows to measure the uncertainty contained in X:

    $H(f)=-\int f(x)\log f(x)\,dx,$ (1.1)

    where f(x) is the pdf of a random variable X. Estimation studies for Shannon entropy with various censoring and distribution strategies can be found in [5,6,7,8]. Ahmadini et al. [9] examined a Bayesian estimate (BE) of dynamic cumulative residual entropy based on the Pareto Ⅱ distribution. Dynamic cumulative residual Renyi entropy estimators for Lomax distribution were considered in [10]. References [11,12] used the record value data to investigate a Bayesian entropy estimator for Lomax and generalized inverse exponential distributions, respectively. Almarashi et al. [13] looked at the Bayesian estimator of dynamic cumulative residual entropy for the Lindley distribution. Hassan et al. [14] studied the statistical inference of information measures for a power-function model in the presence of outliers. Helmy et al. [15] proposed Shannon entropy for the Lomax model in the context of unified hybrid censored samples. In the paper by Hassan et al. [16], estimation of differential entropy for Pareto distribution in the presence of outliers was considered. The logical entropy was suggested in [17] as a new information measure. Ellerman [17] also defined logical mutual information and logical conditional entropy and discussed the relation of logical entropy to Shannon's entropy. For more details about logical entropy and its application to quantum states and fuzzy probability spaces see [17,18,19].

    Despite the enormous success of Shannon's entropy, it has certain shortcomings and might not always be appropriate. Extropy, a different measure of uncertainty that complements Shannon's entropy, has been suggested as a way to address these shortcomings. In the paper by Lad et al. [20], extropy was discussed as an alternative measure of uncertainty and as the complementary dual of entropy. The extropy is given by

    $\psi(x)=-\frac{1}{2}\int_{0}^{\infty}f^{2}(x)\,dx.$ (1.2)

    The scoring of forecasting distributions is one statistical application of extropy. A forecasting distribution's predicted score, for example, is equal to the negative sum of its entropy and extropy under the total log scoring rule [21]. Extropy has been widely studied in commercial and scientific fields, such as astronomical studies of heat distributions in galaxies [22]. Qiu [23] investigated characterization results, monotone properties and lower bounds of the extropy of order statistics and record values. Residual extropy was introduced in [24] to assess the residual uncertainty of a non-negative random variable, as follows:

    $\psi_{t}(x)=-\frac{1}{2\bar{F}^{2}(t)}\int_{t}^{\infty}f^{2}(x)\,dx,$ (1.3)

    where $\bar{F}(\cdot)$ is the survival function. Since 2015, important properties of the extropy measure have been studied in the literature. References [23,24], for example, examined properties such as residual extropy, the extropy of order statistics and the extropy of record values. Raqab and Qiu [25] recently investigated several properties of the extropy measure under ranked set sampling. In contrast, some authors have lately studied the problem of estimating extropy from a complete sample [26]. Based on progressive T-Ⅱ censoring, Hazeb et al. [27] investigated non-parametric estimation of the extropy and entropy measures. Hassan et al. [28] discussed estimating the extropy and cumulative residual extropy of the Pareto distribution in the presence of outliers.

    The gamma distribution is the model most often used for examining skewed data and hydrological processes, and the exponentiated gamma distribution (EGD) is one of the important families of distributions in lifetime testing. Thanks to its adaptability, this model can accommodate both monotonic and nonmonotonic failure rates. On the other hand, the idea of extropy has found use in a variety of domains. It should be emphasized that the literature has paid little attention to the parametric estimation problem of extropy and its residual version. To the best of the authors' knowledge, and considering the significance of the EGD and the extropy measures, the Bayesian and non-Bayesian estimators of these measures have not yet been presented. Additionally, this issue becomes quite significant when the data are censored. In the current study, we use the GT-Ⅰ HCS, an approach that improves the T-Ⅰ HCS. The main contributions of this work may be summarized as follows:

    ● Extropy and residual extropy of the EGD are examined using the maximum likelihood (ML) and Bayesian estimation methods.

    ● The Bayesian estimators for the extropy and residual extropy measures are created using some balanced loss functions (BLOFs).

    ● Lindley's approximation is used to calculate the Bayesian estimators of extropy and residual extropy under a BLOF.

    ● Both the simulation problem and application to actual data are discussed.

    The rest of the paper is organized as follows. The extropy and residual extropy expressions of the EGD are developed in Section 2. The ML estimators of extropy and residual extropy based on the GT-Ⅰ HCS are discussed in Section 3. The Lindley method for calculating Bayesian estimators of the extropy measures under different BLOFs is discussed in Section 4. The simulation study and the application to real data are presented in Sections 5 and 6, respectively. Finally, we conclude the paper in Section 7.

    A number of distributions have been proposed for monotonic failure rates, but the Weibull and gamma distributions are the most commonly employed. The survival function of the gamma distribution cannot be written in a nice closed form, which makes further mathematical manipulation difficult; for this distribution, the survival and hazard functions are usually computed numerically. This is one of the main reasons why the gamma distribution is less popular than the Weibull distribution. Although the Weibull distribution offers a convenient closed form for the hazard and survival functions, it also has some disadvantages. The EGD was investigated in [29] as an alternative to the gamma and Weibull distributions; it has a cumulative distribution function (cdf) $F(x)$ and pdf $f(x)$ of the following respective forms:

    $F(x;\xi,\gamma)=\left(1-(1+\gamma x)e^{-\gamma x}\right)^{\xi},\qquad \xi,\gamma,x>0,$ (2.1)

    and

    $f(x;\xi,\gamma)=\xi\gamma^{2}x\,e^{-\gamma x}\left[1-(1+\gamma x)e^{-\gamma x}\right]^{\xi-1},\qquad \xi,\gamma,x>0,$ (2.2)

    where $\xi$ is the shape parameter and $\gamma$ is the scale parameter. The EGD has received a lot of attention. Shawky and Bakoban [30] offered Bayesian and non-Bayesian estimators of this distribution's parameters and studied some features of the EGD under record values. Shawky and Bakoban [31] also reported inference on this model's order statistics and improved goodness-of-fit tests for the EGD. Feroze and Aslam [32] introduced a Bayesian analysis of the EGD for T-Ⅱ censored samples. Singh et al. [33] investigated Bayesian estimation of the EGD under progressive T-Ⅱ censoring by utilizing various approximation techniques. Mahmoud et al. [34,35] studied Bayesian estimation and prediction for the EGD under the unified hybrid censoring scheme.

    Substituting Eq (2.2) into Eq (1.2) will give the extropy of the EGD:

    $\psi(x)=-\frac{1}{2}\int_{0}^{\infty}\left(\xi\gamma^{2}x\,e^{-\gamma x}\left[1-(1+\gamma x)e^{-\gamma x}\right]^{\xi-1}\right)^{2}dx=-\frac{\xi^{2}\gamma^{4}}{2}\int_{0}^{\infty}x^{2}e^{-2\gamma x}\left[1-(1+\gamma x)e^{-\gamma x}\right]^{2\xi-2}dx;$ (2.3)

    from the binomial theorem,

    $\left[1-(1+\gamma x)e^{-\gamma x}\right]^{2\xi-2}=\sum_{j=0}^{\infty}(-1)^{j}\binom{2\xi-2}{j}\left((1+\gamma x)e^{-\gamma x}\right)^{j};$

    then, Eq (2.3) becomes

    $\psi(x)=-\frac{\xi^{2}\gamma^{4}}{2}\sum_{j=0}^{\infty}(-1)^{j}\binom{2\xi-2}{j}\int_{0}^{\infty}x^{2}e^{-2\gamma x}\left[(1+\gamma x)e^{-\gamma x}\right]^{j}dx=-\frac{\xi^{2}\gamma^{4}}{2}\sum_{j=0}^{\infty}(-1)^{j}\binom{2\xi-2}{j}\int_{0}^{\infty}x^{2}e^{-(2+j)\gamma x}(1+\gamma x)^{j}dx=-\frac{\xi^{2}\gamma^{4}}{2}\sum_{j=0}^{\infty}\sum_{v=0}^{j}(-1)^{j}\gamma^{v}\binom{2\xi-2}{j}\binom{j}{v}\int_{0}^{\infty}x^{2+v}e^{-(2+j)\gamma x}dx;$ (2.4)

    then,

    $\psi(x)=-\frac{\xi^{2}\gamma}{2}\sum_{j=0}^{\infty}\sum_{v=0}^{j}(-1)^{j}\binom{2\xi-2}{j}\binom{j}{v}\frac{\Gamma(v+3)}{(2+j)^{v+3}}.$ (2.5)

    To find the residual extropy of the EGD, substituting Eq (2.2) into Eq (1.3), we get

    $\psi_{t}(x)=\frac{-1}{2\left[\left(1-(1+\gamma t)e^{-\gamma t}\right)^{\xi}\right]^{2}}\int_{t}^{\infty}\left(\xi\gamma^{2}x\,e^{-\gamma x}\left[1-(1+\gamma x)e^{-\gamma x}\right]^{\xi-1}\right)^{2}dx,$ (2.6)

    where

    $I=\int_{t}^{\infty}\left(\xi\gamma^{2}x\,e^{-\gamma x}\left[1-(1+\gamma x)e^{-\gamma x}\right]^{\xi-1}\right)^{2}dx.$ (2.7)

    Employing the binomial theorem more than once, we get

    $I=\xi^{2}\gamma^{4}\sum_{j=0}^{\infty}\sum_{v=0}^{j}(-1)^{j}\gamma^{v}\binom{2\xi-2}{j}\binom{j}{v}\int_{t}^{\infty}x^{2+v}e^{-(2+j)\gamma x}dx,$ (2.8)

    where $\int_{t}^{\infty}x^{2+v}e^{-(2+j)\gamma x}dx$ is an upper incomplete gamma integral, equal to $\frac{\Gamma\left(v+3,\,t\gamma(2+j)\right)}{(2\gamma+j\gamma)^{v+3}}$; then,

    $I=\xi^{2}\gamma^{4}\sum_{j=0}^{\infty}\sum_{v=0}^{j}(-1)^{j}\gamma^{v}\binom{2\xi-2}{j}\binom{j}{v}\frac{\Gamma\left(v+3,\,t\gamma(2+j)\right)}{(2\gamma+j\gamma)^{v+3}},$ (2.9)

    and the residual extropy of the EGD is calculated below:

    $\psi_{t}(x)=\frac{-\xi^{2}\gamma}{2\left[\left(1-(1+\gamma t)e^{-\gamma t}\right)^{\xi}\right]^{2}}\sum_{j=0}^{\infty}\sum_{v=0}^{j}\frac{(-1)^{j}}{(2+j)^{v+3}}\binom{2\xi-2}{j}\binom{j}{v}\Gamma\left(v+3,\,t\gamma(2+j)\right).$ (2.10)

    Note that Eqs (2.5) and (2.10) are each a function of the parameters $\xi$ and $\gamma$; they constitute the required expressions of $\psi(x)$ and $\psi_{t}(x)$ for the EGD.
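
    As a quick numerical sanity check of these closed forms, the defining integrals can be evaluated directly for the EGD pdf (2.2). The following Python sketch computes the extropy from Eq (1.2) and the residual extropy with the normalization written in Eq (2.6); the function names and the quadrature routine are our choices, not part of the paper.

```python
import numpy as np
from scipy.integrate import quad

def egd_pdf(x, xi, gamma):
    """Exponentiated gamma pdf, Eq (2.2)."""
    return xi * gamma**2 * x * np.exp(-gamma * x) * \
           (1.0 - (1.0 + gamma * x) * np.exp(-gamma * x))**(xi - 1.0)

def extropy_numeric(xi, gamma):
    """Extropy by direct integration of Eq (1.2)."""
    val, _ = quad(lambda x: egd_pdf(x, xi, gamma)**2, 0.0, np.inf)
    return -0.5 * val

def residual_extropy_numeric(xi, gamma, t):
    """Residual extropy with the normalizing factor used in Eq (2.6)."""
    denom = ((1.0 - (1.0 + gamma * t) * np.exp(-gamma * t))**xi)**2
    val, _ = quad(lambda x: egd_pdf(x, xi, gamma)**2, t, np.inf)
    return -val / (2.0 * denom)

# illustrative parameter values (the simulation section uses xi = 0.7, gamma = 2.3, t = 0.4)
print(extropy_numeric(0.7, 2.3), residual_extropy_numeric(0.7, 2.3, t=0.4))
```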

    Here, the ML estimators for the EGD are derived under the GT-Ⅰ HCS. Assume that, in a life-testing study, there are $n$ identical elements; let $X_{1:n},X_{2:n},\dots,X_{n:n}$ indicate the ordered failure times of these elements, with fixed values $r,k\in\{1,2,\dots,n\}$, $k<r<n$, and time $T\in(0,\infty)$. The likelihood function of $\xi$ and $\gamma$ is given by

    $L(\underline{x}\,|\,\xi,\gamma)=\frac{n!}{(n-D)!}\left[\prod_{i=1}^{D}f(x_{i:n})\right]\left[1-F(c)\right]^{n-D},$ (3.1)

    where D is the experiment's total number of failures until time c, and its values are represented by

    $(D,c)=\begin{cases}(k,\;x_{k:n}) & \text{for case Ⅰ},\\ (r,\;x_{r:n}) & \text{for case Ⅱ},\\ (d,\;T) & \text{for case Ⅲ},\end{cases}$ (3.2)

    where d denotes the number of failures that occurred until time T. Substituting Eqs (2.1) and (2.2) into Eq (3.1), we get

    $L(\underline{x}\,|\,\xi,\gamma)=\frac{n!}{(n-D)!}\left[\prod_{i=1}^{D}\xi\gamma^{2}x_{i}e^{-\gamma x_{i}}\left[1-(1+\gamma x_{i})e^{-\gamma x_{i}}\right]^{\xi-1}\right]\times\left[1-\left(1-(1+\gamma c)e^{-\gamma c}\right)^{\xi}\right]^{n-D},$ (3.3)

    where $x_{i}$ is written instead of $x_{i:n}$ for simplicity. Taking the logarithm of both sides, denoted by $l$, we get

    $l\propto D\ln\xi+2D\ln\gamma+\sum_{i=1}^{D}\ln(x_{i})-\gamma\sum_{i=1}^{D}x_{i}+(\xi-1)\sum_{i=1}^{D}\ln\left[1-(1+\gamma x_{i})e^{-\gamma x_{i}}\right]+(n-D)\ln\left[1-\left(1-(1+\gamma c)e^{-\gamma c}\right)^{\xi}\right].$ (3.4)

    Taking the derivatives of Eq (3.4) with respect to $\xi$ and $\gamma$, we obtain

    $\frac{\partial l}{\partial\xi}=\frac{D}{\xi}+\sum_{i=1}^{D}\ln\left[1-(1+\gamma x_{i})e^{-\gamma x_{i}}\right]-\frac{(n-D)\left(1-(1+\gamma c)e^{-\gamma c}\right)^{\xi}\ln\left[1-(1+\gamma c)e^{-\gamma c}\right]}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}},$ (3.5)

    and

    $\frac{\partial l}{\partial\gamma}=\frac{2D}{\gamma}-\sum_{i=1}^{D}x_{i}+(\xi-1)\sum_{i=1}^{D}\frac{\gamma x_{i}^{2}e^{-\gamma x_{i}}}{1-(1+\gamma x_{i})e^{-\gamma x_{i}}}-\frac{\gamma\xi c^{2}(n-D)e^{-\gamma c}\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi-1}}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}.$ (3.6)

    Set Eqs (3.5) and (3.6) equal to zero and solve them to determine the ML estimator of ξ and γ:

    $\frac{D}{\hat\xi}+\sum_{i=1}^{D}\ln\left[1-(1+\hat\gamma x_{i})e^{-\hat\gamma x_{i}}\right]-\frac{(n-D)\ln\left[1-(1+\hat\gamma c)e^{-\hat\gamma c}\right]\left[1-(1+\hat\gamma c)e^{-\hat\gamma c}\right]^{\hat\xi}}{1-\left[1-(1+\hat\gamma c)e^{-\hat\gamma c}\right]^{\hat\xi}}=0,$ (3.7)

    and

    $\frac{2D}{\hat\gamma}-\sum_{i=1}^{D}x_{i}+(\hat\xi-1)\sum_{i=1}^{D}\frac{\hat\gamma x_{i}^{2}e^{-\hat\gamma x_{i}}}{1-(1+\hat\gamma x_{i})e^{-\hat\gamma x_{i}}}-\frac{\hat\gamma\hat\xi c^{2}(n-D)e^{-\hat\gamma c}\left[1-(1+\hat\gamma c)e^{-\hat\gamma c}\right]^{\hat\xi-1}}{1-\left[1-(1+\hat\gamma c)e^{-\hat\gamma c}\right]^{\hat\xi}}=0.$ (3.8)

    Explicit solutions of these equations are difficult to obtain; thus, an appropriate numerical approach may be used to compute the estimators. Then, by the invariance property, the ML estimators of $\psi(x)$ and $\psi_{t}(x)$, say $\hat\psi(x)$ and $\hat\psi_{t}(x)$, are respectively as follows:

    $\hat\psi(x)=-\frac{\hat\xi^{2}\hat\gamma}{2}\sum_{j=0}^{\infty}\sum_{v=0}^{j}(-1)^{j}\binom{2\hat\xi-2}{j}\binom{j}{v}\frac{\Gamma(v+3)}{(2+j)^{v+3}},$ (3.9)

    and

    $\hat\psi_{t}(x)=\frac{-\hat\xi^{2}\hat\gamma}{2\left[\left(1-(1+\hat\gamma t)e^{-\hat\gamma t}\right)^{\hat\xi}\right]^{2}}\sum_{j=0}^{\infty}\sum_{v=0}^{j}\frac{(-1)^{j}}{(2+j)^{v+3}}\binom{2\hat\xi-2}{j}\binom{j}{v}\Gamma\left(v+3,\,t\hat\gamma(2+j)\right).$ (3.10)
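
    Because Eqs (3.7) and (3.8) admit no closed-form solution, a common route is to maximize the log-likelihood (3.4) numerically and then substitute the resulting $(\hat\xi,\hat\gamma)$ into Eqs (3.9) and (3.10). The sketch below is one possible implementation, with hypothetical data and an arbitrary optimizer choice; it is not the code used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x_obs, n, c):
    """Negative of the GT-I HCS log-likelihood kernel in Eq (3.4)."""
    xi, gamma = params
    if xi <= 0 or gamma <= 0:
        return np.inf
    D = len(x_obs)
    g = 1.0 - (1.0 + gamma * x_obs) * np.exp(-gamma * x_obs)   # so that F(x) = g**xi
    gc = 1.0 - (1.0 + gamma * c) * np.exp(-gamma * c)
    ll = (D * np.log(xi) + 2 * D * np.log(gamma) + np.sum(np.log(x_obs))
          - gamma * np.sum(x_obs) + (xi - 1.0) * np.sum(np.log(g))
          + (n - D) * np.log(1.0 - gc**xi))
    return -ll

# x_obs: the D observed ordered failure times; c: the termination time from Eq (3.2)
x_obs = np.array([0.3, 0.5, 0.6, 0.9, 1.1, 1.4, 1.8])   # hypothetical data
n, c = 10, 1.8
fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x_obs, n, c), method="Nelder-Mead")
xi_hat, gamma_hat = fit.x
print(xi_hat, gamma_hat)   # plug these into Eqs (3.9) and (3.10)
```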

    Using different types of BLOFs, we can find Bayesian estimators of $\xi$, $\gamma$, $\psi(x)$ and $\psi_{t}(x)$. We assume that $\xi$ and $\gamma$ are independently distributed with gamma$(a_{1},b_{1})$ and gamma$(a_{2},b_{2})$ priors, respectively, since the gamma distribution is utilized as a conjugate prior for several distributions and, at the same time, serves as a conjugate prior for the EGD (see [32]). Then, the priors of $\xi$ and $\gamma$ are given by

    $\pi_{1}(\xi)\propto\xi^{a_{1}-1}e^{-b_{1}\xi},\qquad \xi>0,$

    and

    $\pi_{2}(\gamma)\propto\gamma^{a_{2}-1}e^{-b_{2}\gamma},\qquad \gamma>0,$

    where $a_{1},a_{2},b_{1}$ and $b_{2}>0$ are assumed to be known constant hyperparameters. The joint prior density of $\xi$ and $\gamma$ is then

    $\pi(\xi,\gamma)\propto\xi^{a_{1}-1}\gamma^{a_{2}-1}e^{-(b_{1}\xi+b_{2}\gamma)}.$ (4.1)

    The posterior distribution is calculated as follows:

    $\pi(\xi,\gamma\,|\,\underline{x})=\frac{L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)}{\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}.$ (4.2)

    The joint posterior density function is calculated from Eqs (3.3) and (4.1) as follows:

    $\pi(\xi,\gamma\,|\,\underline{x})=E_{1}\,\xi^{D+a_{1}-1}\gamma^{a_{2}+2D-1}e^{-\gamma\left(b_{2}+\sum_{i=1}^{D}x_{i}\right)}\times e^{-\xi b_{1}+(\xi-1)\sum_{i=1}^{D}\log\left[1-(1+\gamma x_{i})e^{-\gamma x_{i}}\right]}\left[1-\left(1-(1+\gamma c)e^{-\gamma c}\right)^{\xi}\right]^{n-D};$ (4.3)

    E1 is the normalizing constant, which is equal to

    $E_{1}=\left[\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma\right]^{-1}.$ (4.4)

    BLOFs are attractive because they take into account the proximity of a given estimator $\delta$ to both a target estimator $\delta_{o}$ and the unknown parameter $\theta$ being estimated, as stated by Zellner's formulation (see [36]):

    $L_{\rho,\omega,\delta_{o}}(\theta,\delta)=\omega\,\rho(\delta,\delta_{o})+(1-\omega)\,\rho(\theta,\delta),$ (4.5)

    where $0\le\omega\le 1$, $\rho(\theta,\delta)$ is an arbitrary loss function, $\rho(\delta,\delta_{o})$ is the corresponding unbalanced loss function and $\delta_{o}$ is a chosen prior estimator of $\theta$.

    Given the squared error loss $\rho(\theta,\delta)=(\delta-\theta)^{2}$, Eq (4.5) becomes the balanced squared error loss (BSEL) function

    $L_{\rho,\omega,\delta_{o}}=\omega(\delta-\delta_{o})^{2}+(1-\omega)(\delta-\theta)^{2}.$

    In this situation, the BE of θ is given by

    $\hat\theta_{BSEL}=\omega\hat\theta+(1-\omega)E(\theta\,|\,\underline{x}),$ (4.6)

    where $\theta$ stands for $\xi$, $\gamma$, $\psi(x)$ or $\psi_{t}(x)$. Hence,

    $\hat\theta_{BSEL}=\omega\hat\theta+(1-\omega)\frac{\int_{0}^{\infty}\int_{0}^{\infty}\theta\,L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}{\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}.$ (4.7)

    If we choose

    $\rho(\theta,\delta)=e^{q(\delta-\theta)}-q(\delta-\theta)-1,$

    where $q\neq 0$, we get the balanced linear exponential (BLN) loss function, and the BE of $\theta$ in this situation is

    $\hat\theta_{BLN}=-\frac{1}{q}\log\left[\omega e^{-q\hat\theta}+(1-\omega)\frac{\int_{0}^{\infty}\int_{0}^{\infty}e^{-q\theta}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}{\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}\right].$ (4.8)

    If we choose

    $\rho(\delta,\theta)=\left(\frac{\delta}{\theta}\right)^{q}-q\log\left(\frac{\delta}{\theta}\right)-1,$

    where $q\neq 0$, we get the balanced general entropy (BGE) loss function, and the BE of $\theta$ in this situation is

    $\hat\theta_{BGE}=\left[\omega\hat\theta^{-q}+(1-\omega)\frac{\int_{0}^{\infty}\int_{0}^{\infty}\theta^{-q}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}{\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\,|\,\xi,\gamma)\,\pi(\xi,\gamma)\,d\xi\,d\gamma}\right]^{-\frac{1}{q}}.$ (4.9)

    From Eqs (4.7)–(4.9), it should be observed that all Bayesian estimators are expressed as a ratio of two integrals, which cannot be simplified or directly computed. As a result, we compute the estimates using the Lindley method.
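
    Before turning to the Lindley approximation, note that the ratio form in Eq (4.7) can in principle also be approximated by brute-force numerical integration over a parameter grid. The following Python sketch illustrates this for the BSEL estimator of Eq (4.6); the data, grid limits and hyperparameters are hypothetical, and the approach is only an illustrative cross-check, not the method used in the paper.

```python
import numpy as np

def log_post_kernel(xi, gamma, x_obs, n, c, a1, b1, a2, b2):
    """log of L(x | xi, gamma) * pi(xi, gamma) up to a constant, from Eqs (3.4) and (4.1)."""
    D = len(x_obs)
    g = 1.0 - (1.0 + gamma * x_obs) * np.exp(-gamma * x_obs)
    gc = 1.0 - (1.0 + gamma * c) * np.exp(-gamma * c)
    loglik = (D * np.log(xi) + 2 * D * np.log(gamma) - gamma * x_obs.sum()
              + (xi - 1.0) * np.log(g).sum() + (n - D) * np.log(1.0 - gc**xi))
    logprior = (a1 - 1.0) * np.log(xi) - b1 * xi + (a2 - 1.0) * np.log(gamma) - b2 * gamma
    return loglik + logprior

def bsel_on_grid(theta_fun, theta_mle, x_obs, n, c, hyper, omega=0.5,
                 xi_grid=np.linspace(0.05, 4.0, 120), ga_grid=np.linspace(0.05, 8.0, 120)):
    """Eq (4.6): omega * MLE + (1 - omega) * posterior mean, with the posterior mean
    of Eq (4.7) approximated by summing the kernel over a rectangular grid."""
    logw = np.array([[log_post_kernel(xi, ga, x_obs, n, c, *hyper) for ga in ga_grid]
                     for xi in xi_grid])
    w = np.exp(logw - logw.max())                       # stabilize before exponentiating
    vals = np.array([[theta_fun(xi, ga) for ga in ga_grid] for xi in xi_grid])
    post_mean = (w * vals).sum() / w.sum()              # ratio of the two integrals
    return omega * theta_mle + (1.0 - omega) * post_mean

# hypothetical censored sample; theta_fun selects the quantity of interest (here gamma itself)
x_obs = np.array([0.3, 0.5, 0.6, 0.9, 1.1, 1.4, 1.8])
est = bsel_on_grid(lambda xi, ga: ga, theta_mle=1.2, x_obs=x_obs, n=10, c=1.8,
                   hyper=(2.0, 4.0, 1.5, 1.0))
print(est)
```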

    Lindley [37] proposed this method to approximate such a ratio of two integrals as a whole, yielding a single numerical value. The approximate BEs of $\xi$, $\gamma$, $\psi(x)$ and $\psi_{t}(x)$ are computed using the Lindley method in this subsection. In the general case, the Lindley method can be expressed as follows:

    $\hat u=u(\hat\xi,\hat\gamma)+\frac{1}{2}\sum_{i,j=1}^{m}\left[u_{ij}(\hat\xi,\hat\gamma)+2u_{i}(\hat\xi,\hat\gamma)\rho_{j}(\hat\xi,\hat\gamma)\right]\hat\sigma_{ij}+\frac{1}{2}\sum_{i,j,k,l=1}^{m}\hat\sigma_{ij}\hat\sigma_{kl}\hat l_{ijk}\hat u_{k}(\hat\xi,\hat\gamma),$ (4.10)

    where $i,j,k,l=1,2,\dots,m$, and $\hat\xi$ and $\hat\gamma$ are the ML estimators of $\xi$ and $\gamma$, respectively.

    Also, $u_{i}(\xi)=\frac{\partial u}{\partial\xi}$, $u_{ij}(\xi,\gamma)=\frac{\partial^{2}u}{\partial\xi\,\partial\gamma}$ and $l_{ijk}(\xi,\gamma,\gamma)=\frac{\partial^{3}l(\xi,\gamma)}{\partial\xi\,\partial\gamma\,\partial\gamma}$;

    $\rho=\log\pi(\xi,\gamma)$ is the logarithm of the joint prior density function, with, e.g., $\rho_{j}(\xi,\gamma)=\frac{\partial\rho}{\partial\gamma}$;

    and $(\sigma_{ij})_{N\times N}=\left(-\frac{\partial^{2}l}{\partial\xi\,\partial\gamma}\right)^{-1}$, where $l$ is the log-likelihood function; that is, $\sigma_{ij}$ is the $(i,j)$th element of the inverse of the Fisher information matrix.

    For the two-parameter case, the Lindley method is given by

    $\hat u=u(\hat\xi,\hat\gamma)+\frac{1}{2}\left[(u_{11}+2u_{1}\rho_{1})\sigma_{11}+(u_{12}+2u_{1}\rho_{2})\sigma_{12}+(u_{21}+2u_{2}\rho_{1})\sigma_{21}+(u_{22}+2u_{2}\rho_{2})\sigma_{22}\right]+\frac{1}{2}\left[(u_{1}\sigma_{11}+u_{2}\sigma_{12})(l_{111}\sigma_{11}+l_{121}\sigma_{12}+l_{211}\sigma_{21}+l_{221}\sigma_{22})+(u_{1}\sigma_{21}+u_{2}\sigma_{22})(l_{211}\sigma_{11}+l_{122}\sigma_{12}+l_{212}\sigma_{21}+l_{222}\sigma_{22})\right],$ (4.11)

    where $u_{1}=\frac{\partial u}{\partial\xi}$, $u_{12}=\frac{\partial^{2}u}{\partial\xi\,\partial\gamma}$, $l_{122}=\frac{\partial^{3}l}{\partial\xi\,\partial\gamma^{2}}$, $\rho_{2}=\frac{\partial\rho}{\partial\gamma}$ and $\sigma_{12}$ is the $(1,2)$th element of the inverse of the Fisher information matrix. The details and derivatives can be found in Appendix A.
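
    For concreteness, the prior terms entering Eqs (4.10) and (4.11) follow directly from the joint prior (4.1): writing $\rho=\log\pi(\xi,\gamma)$ up to an additive constant gives

    $\rho=(a_{1}-1)\log\xi+(a_{2}-1)\log\gamma-b_{1}\xi-b_{2}\gamma+\mathrm{const},\qquad \rho_{1}=\frac{\partial\rho}{\partial\xi}=\frac{a_{1}-1}{\xi}-b_{1},\qquad \rho_{2}=\frac{\partial\rho}{\partial\gamma}=\frac{a_{2}-1}{\gamma}-b_{2},$

    both derivatives being evaluated at $(\hat\xi,\hat\gamma)$.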

    In this part, we look into the efficiency of the ML estimates (MLEs) and BEs of ξ, γ, ψ(x) and ψt(x) for the EGD in terms of the mean squared error (MSE) under different BLOFs.

    ● For given hyperparameters a1,b1,a2 and b2, generate random values of ξ and γ.

    ● Making use of the $\xi$ and $\gamma$ obtained in the previous step, we generate an ordered sample of size $n$ from the EGD (a sampling sketch is given after the simulation settings below).

    ● The MLE of ξ,γ,ψ(x) and ψt(x) has been computed for different values of r, k and T, according to Section 3.

    ● The BE of ξ,γ,ψ(x) and ψt(x), based on the BSEL, BLN, and BGE loss functions using the Lindley method, has been provided, respectively, for different values of r, k and T, according to Section 4.

    ● The MSE over N samples is provided by Eq (5.1), if ˆθ is an estimate of θ.

    $\mathrm{MSE}(\hat\theta)=\frac{1}{N}\sum_{i=1}^{N}(\hat\theta_{i}-\theta)^{2},$ (5.1)

    where $\hat\theta$ stands for $\hat\xi$, $\hat\gamma$, $\hat\psi(x)$ or $\hat\psi_{t}(x)$.

    ● The preceding steps are repeated $N=1000$ times to generate samples from the EGD with the hyperparameters $(a_{1}=2, b_{1}=4, a_{2}=1.5)$, $\omega=0.5$ and $t=0.4$.

    ● The shape parameter $q$ of the loss functions is selected as $q=(-0.6,\,0.6)$, and the performance of the BLN and BGE loss functions varies depending on the value of $q$.

    ● The true selected values of the parameters are $\xi=0.7$ and $\gamma=2.3$, and the corresponding true values are $\psi(x)=-0.283279$ and $\psi_{t}(x)=-0.274054$.

    ● The MLEs and BEs of ξ,γ, ψ(x) and ψt(x) were studied under the following conditions:

    1- Values of n,r,k are taken as (n=250,r=230,k=150) and (n=150,r=120,k=80) at different values of T, where T=(0.8,1.5,3) (see Table 1).

    Table 1.  MLE and BE results for ξ,γ, ψ(x) and ψt(x) based on the GT-Ⅰ HCS with BLOFs using different values of T, along with the corresponding MSE in each case.
    Estimate
    n (r,k) T MLE BSEL BLN BGE
    q=−0.6 q=0.6 q=−0.6 q=0.6
    150 (120, 80) 0.8 2.83027 2.85351 2.86904 2.86904 2.84977 2.83845
    γ 1.5 1.60496 1.62274 1.6297 1.6297 1.6198 1.61089
    3 0.7071 0.71762 0.71955 0.71955 0.71579 0.7102
    250 (230,150) 0.8 3.65544 3.66824 3.67906 3.67906 3.66625 3.66024
    1.5 2.87996 2.89194 2.8999 2.8999 2.89008 2.88447
    3 1.77925 1.78916 1.79321 1.79321 1.78763 1.78302
    150 (120, 80) 0.8 0.5927 0.59829 0.59901 0.59756 0.5975 0.59509
    ξ 1.5 0.4158 0.41982 0.42016 0.41948 0.41929 0.41768
    3 0.2803 0.28295 0.28308 0.28281 0.28262 0.28164
    250 (230,150) 0.8 0.6955 0.69893 0.69948 0.69838 0.69842 0.69687
    1.5 0.5842 0.58719 0.58757 0.5868 0.58675 0.58544
    3 0.4225 0.42483 0.42503 0.42464 0.4245 0.42361
    150 (120, 80) 0.8 -0.323 -0.32656 -0.32616 -0.32695 -0.32573 -0.3232
    ψ(x) 1.5 -0.1442 -0.14723 -0.14708 -0.14737 -0.14654 -0.1444
    3 -0.0414 -0.04279 -0.04278 -0.04281 -0.0425 -0.04158
    250 (230,150) 0.8 -0.44481 -0.44599 -0.44574 -0.44624 -0.44561 -0.4444
    1.5 -0.3284 -0.3301 -0.32989 -0.33031 -0.32967 -0.32837
    3 -0.1621 -0.16383 -0.16374 -0.16393 -0.16345 -0.1622
    150 (120, 80) 0.8 -0.2546 -0.2876 -0.28743 -0.28777 -0.2872 -0.28602
    ψt(x) 1.5 -0.215 -0.37582 -0.37561 -0.37604 -0.37544 -0.3743
    3 -0.0996 -0.41214 -0.41185 -0.41242 -0.41168 -0.4102
    250 (230,150) 0.8 -0.236488 -0.23691 -0.23683 -0.23699 -0.23669 -0.2360
    1.5 -0.2432 -0.27686 -0.27677 -0.27696 -0.27664 -0.27599
    3 -0.2096 -0.34693 -0.34682 -0.34705 -0.34671 -0.3460
    MSE
    150 (120, 80) 0.8 0.343 0.369 0.38707 0.35091 0.36488 0.352
    γ 1.5 0.4995 0.47527 0.46603 0.48482 0.47927 0.49149
    3 0.24189 0.20858 0.20252 0.21473 0.21439 0.23211
    250 (230,150) 0.8 1.888 1.92318 1.95316 1.89299 1.91773 1.9013
    1.5 0.36211 0.37627 0.38588 0.36664 0.37407 0.36748
    3 0.27955 0.26939 0.26531 0.27355 0.27095 0.2757
    150 (120, 80) 0.8 0.1766 0.17453 0.17441 0.17464 0.17479 0.17561
    ξ 1.5 0.08317 0.081 0.08082 0.08118 0.08129 0.08217
    3 0.0197 0.01882 0.01872 0.01892 0.01895 0.01936
    250 (230,150) 0.8 0.0783 0.07708 0.07697 0.07718 0.07724 0.0777
    1.5 0.0175 0.01694 0.01686 0.01701 0.01703 0.0173
    3 0.00801 0.0081 0.00814 0.00808 0.00809 0.0080
    150 (120, 80) 0.8 0.0585 0.05793 0.05794 0.05792 0.05807 0.05851
    ψ(x) 1.5 0.02003 0.01922 0.01926 0.01918 0.01941 0.01998
    3 0.00407 0.00428 0.00425 0.00432 0.00422 0.00404
    250 (230,150) 0.8 0.0277 0.02808 0.028 0.02816 0.02797 0.02761
    1.5 0.0151 0.01469 0.01471 0.01467 0.01478 0.01506
    3 0.00328 0.00342 0.0034 0.00344 0.00338 0.00327
    150 (120, 80) 0.8 0.1182 0.00171 0.00172 0.00169 0.00174 0.00182
    ψt(x) 1.5 0.0534 0.00501 0.00503 0.00498 0.00506 0.00521
    3 0.0376 0.02457 0.02462 0.02451 0.02469 0.02506
    250 (230,150) 0.8 0.04396 0.04284 0.04287 0.04281 0.04293 0.0432
    1.5 0.04099 0.02786 0.02789 0.02783 0.02793 0.02815
    3 0.02516 0.00949 0.00951 0.00947 0.00953 0.00966


    2- Values of n,r,T are taken as (n=150,r=120,T=3) at different values of k, where k=(60,80,100) (see Table 2).

    Table 2.  MLE and BE results for ξ,γ, ψ(x) and ψt(x) based on the GT-Ⅰ HCS with BLOFs for different values of k at T=3 and t=0.4, along with the corresponding MSE in each case.
    Estimate
    (n,r) k MLE BSEL BLN BGE
    q=−0.6 q=0.6 q=−0.6 q=0.6
    (150,120) 60 1.80553 1.86432 1.88962 1.88962 1.85457 1.8243
    γ 80 2.20961 2.24898 2.27005 2.27005 2.24241 2.22236
    100 2.66151 2.69121 2.71024 2.71024 2.68632 2.6714
    (250,230) 150 2.40402 2.42421 2.43617 2.43617 2.42083 2.41061
    180 2.86144 2.8777 2.88897 2.88897 2.87504 2.867
    210 3.3904 3.26401 3.27118 3.27118 3.26276 3.2592
    (150,120) 60 0.470143 0.47942 0.48014 0.47869 0.47842 0.47535
    ξ 80 0.517534 0.52482 0.52554 0.52409 0.52391 0.52117
    100 0.571024 0.57731 0.57807 0.57655 0.57645 0.57385
    (250,230) 150 0.535 0.53987 0.54029 0.53945 0.53935 0.5377
    180 0.595 0.59921 0.59968 0.59875 0.5987 0.59716
    210 0.66122 0.61358 0.6134 0.61377 0.61378 0.61437
    (150,120) 60 -0.180905 -0.18981 -0.18928 -0.19032 -0.18791 -0.18198
    ψ(x) 80 -0.235081 -0.2411 -0.24061 -0.24158 -0.23971 -0.23548
    100 -0.298506 -0.30274 -0.30228 -0.3032 -0.30168 -0.29848
    (250,230) 150 -0.2616 -0.26478 -0.26449 -0.26506 -0.26403 -0.26179
    180 -0.329 -0.3313 -0.33102 -0.33158 -0.33072 -0.32899
    210 -0.4064 -0.38103 -0.38096 -0.38111 -0.38092 -0.3806
    (150,120) 60 -0.24894 -0.37997 -0.37956 -0.38039 -0.37924 -0.37711
    ψt(x) 80 -0.256614 -0.33925 -0.33896 -0.33954 -0.33869 -0.33703
    100 -0.255107 -0.2996 -0.2994 -0.29981 -0.29914 -0.29778
    (250,230) 150 -0.254 -0.31951 -0.31936 -0.31966 -0.31921 -0.31829
    180 -0.253495 -0.28575 -0.28564 -0.28586 -0.28549 -0.28473
    210 -0.2427 -0.2437 -0.24372 -0.24386 -0.24361 -0.2431
    MSE
    (150,120) 60 0.403425 0.35218 0.33667 0.3703 0.36025 0.38655
    γ 80 0.233308 0.25632 0.27314 0.23966 0.25249 0.24117
    100 0.136536 0.13227 0.13348 0.13197 0.13285 0.13518
    (250,230) 150 0.375 0.33469 0.35833 0.33105 0.34162 0.32243
    180 0.07467 0.07956 0.08337 0.07594 0.07871 0.0763
    (150,120) 60 0.059 0.05533 0.05506 0.0556 0.05573 0.05699
    ξ 80 0.040 0.03831 0.03811 0.03851 0.03859 0.03946
    100 0.0255 0.02416 0.02404 0.0242 0.02434 0.02488
    (250,230) 150 0.0313 0.03014 0.03002 0.03026 0.03029 0.03076
    180 0.0166 0.01596 0.0158 0.01603 0.01605 0.01633
    (150,120) 60 0.01410 0.01242 0.0125 0.01235 0.01276 0.01387
    ψ(x) 80 0.00595 0.00538 0.0054 0.00535 0.0055 0.00589
    100 0.0036 0.00372 0.0037 0.00374 0.00369 0.00361
    (250,230) 150 0.00422 0.0044 0.00438 0.00443 0.00435 0.00421
    180 0.00253 0.00239 0.0024 0.00238 0.00242 0.00252
    (150,120) 60 0.0403 0.02115 0.02121 0.02109 0.02129 0.02169
    ψt(x) 80 0.0369 0.01155 0.01161 0.01149 0.01167 0.01203
    100 0.0325 0.00558 0.00563 0.00553 0.00568 0.00598
    (250,230) 150 0.037 0.02505 0.02508 0.02501 0.02513 0.02537
    180 0.0362 0.01558 0.01562 0.01555 0.01566 0.01589


    3- Values of n,r,T are taken as (n=250,r=230,T=3) at different values of k, where k=(150,180,210) (see Table 2).

    4- Values of n,k,T are taken as (n=150,k=80,T=3) for different values of r, where r=(90,110,130) (see Table 3).

    Table 3.  MLE and BE results for ξ,γ, ψ(x) and ψt(x) based on the GT-Ⅰ HCS with BLOFs using different values of r at T=3, along with the corresponding MSE in each case.
    Estimate
    n (T,k) r MLE BSEL BLN BGE
    q=−0.6 q=0.6 q=−0.6 q=0.6
    150 (3, 80) 90 2.2218 2.46768 2.48757 2.48757 2.46207 2.44501
    γ 110 2.94203 2.96886 2.98765 2.98765 2.9645 2.95129
    130 3.5292 3.55199 3.57051 3.57051 3.54841 3.53762
    250 (3,100) 150 2.55432 2.57294 2.58461 2.58461 2.56985 2.56049
    200 3.20296 3.21759 3.22876 3.22876 3.21523 3.20812
    230 3.764 3.7769 3.78811 3.78811 3.77489 3.76884
    150 (3, 80) 90 0.5449 0.5517 0.55243 0.55095 0.55082 0.5481
    ξ 110 0.6058 0.61191 0.61271 0.6111 0.61105 0.60845
    130 0.6864 0.6923 0.69323 0.69136 0.69142 0.68875
    250 (3,100) 150 0.5570 0.56083 0.56127 0.56039 0.56032 0.55876
    200 0.6356 0.63913 0.63963 0.63863 0.63862 0.63708
    230 0.7122 0.71567 0.71624 0.71509 0.71514 0.71355
    150 (3, 80) 90 -0.2666 -0.27169 -0.2712 -0.27216 -0.27048 -0.2668
    ψ(x) 110 -0.3393 -0.34284 -0.34238 -0.3433 -0.34191 -0.33911
    130 -0.42504 -0.42729 -0.42686 -0.42772 -0.4266 -0.4245
    250 (3,100) 150 -0.2839 -0.28669 -0.28641 -0.28698 -0.28602 -0.28397
    200 -0.3786 -0.38038 -0.3801 -0.38065 -0.37989 -0.37841
    230 -0.4608 -0.46199 -0.46173 -0.46224 -0.46162 -0.4605
    150 (3, 80) 90 -0.2579 -0.3195 -0.3192 -0.3197 -0.31903 -0.31754
    ψt(x) 110 -0.25143 -0.27931 -0.27913 -0.27949 -0.27889 -0.2776
    130 -0.2436 -0.24597 -0.24583 -0.24611 -0.24559 -0.24448
    250 (3,100) 150 -0.256085 -0.30852 -0.30839 -0.30865 -0.30824 -0.30738
    200 -0.245503 -0.26163 -0.26154 -0.26173 -0.2614 -0.26069
    230 -0.235102 -0.23235 -0.23228 -0.23243 -0.23213 -0.23148
    MSE
    150 (3, 80) 90 1.6028 1.65972 1.70734 1.6115 1.65079 1.623
    γ 110 0.5128 0.54865 0.57547 0.52169 0.54282 0.52541
    130 0.1239 0.13504 0.14408 0.12646 0.13312 0.1277
    250 (3,100) 150 1.1948 1.2329 1.26649 1.19921 1.22706 1.20926
    200 0.8691 0.89597 0.91694 0.87487 0.89165 0.87871
    230 0.1286 0.13868 0.1458 0.13169 0.1369 0.13202
    150 (3, 80) 90 0.0316 0.02982 0.02965 0.02998 0.03004 0.03074
    ξ 110 0.01856 0.01772 0.01764 0.0178 0.01783 0.01819
    130 0.01355 0.01374 0.01382 0.01366 0.01371 0.01362
    250 (3,100) 150 0.02571 0.02473 0.02462 0.02483 0.02486 0.02526
    200 0.01055 0.01021 0.01017 0.01024 0.01026 0.01041
    230 0.00839 0.0086 0.00865 0.00855 0.00856 0.00847
    150 (3, 80) 90 0.0230 0.02358 0.02346 0.0237 0.0234 0.02285
    ψ(x) 110 0.00652 0.00686 0.0068 0.00692 0.00676 0.00648
    130 0.003543 0.00335 0.00335 0.00335 0.00339 0.0035
    250 (3,100) 150 0.0331 0.03354 0.03345 0.03362 0.03341 0.03303
    200 0.0110 0.01135 0.0113 0.01141 0.01126 0.011
    230 0.00220 0.0022 0.00219 0.002 0.00219 0.0022
    150 (3, 80) 90 0.04203 0.03941 0.03946 0.03936 0.0395 0.04
    ψt(x) 110 0.03870 0.02727 0.02733 0.02721 0.02741 0.02783
    130 0.0363 0.01587 0.01593 0.01581 0.016 0.01637
    250 (3,100) 150 0.0444 0.0447 0.04473 0.04467 0.04479 0.04507
    200 0.0402 0.0331 0.0332 0.03314 0.03326 0.03352
    230 0.0363 0.01842 0.0184 0.01838 0.0185 0.01873


    5- Values of n,k,T are taken as (n=250,k=100,T=3) for different values of r, where r=(150,180,230) (see Table 3).

    6- The simulation results are listed in Tables 1–3 and illustrated in Figures 2–7.
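
    As mentioned in the second simulation step above, EGD samples can be generated by inverting the cdf (2.1); since the inverse has no closed form, a numerical root-finder can be used. The following Python sketch is an assumption-laden illustration of such a sampler, not the paper's code.

```python
import numpy as np
from scipy.optimize import brentq

def egd_rvs(size, xi, gamma, seed=None):
    """Draw EGD(xi, gamma) variates by numerically inverting F(x) of Eq (2.1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    cdf = lambda x: (1.0 - (1.0 + gamma * x) * np.exp(-gamma * x))**xi
    samples = []
    for ui in u:
        upper = 1.0
        while cdf(upper) < ui:          # enlarge the bracket until it contains the root
            upper *= 2.0
        samples.append(brentq(lambda x: cdf(x) - ui, 1e-12, upper))
    return np.sort(np.array(samples))   # ordered sample, ready for the GT-I HCS rule

x = egd_rvs(150, xi=0.7, gamma=2.3, seed=2023)   # parameter values from the simulation settings
print(x[:5])
```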

    Figure 2.  MSE of MLEs and BEs for the residual extropy for different values of r.
    Figure 3.  MSE of MLEs and BEs for extropy for various values of r.
    Figure 4.  MSE of MLEs and BEs for residual extropy for different values of k.
    Figure 5.  MSE of the MLEs and BEs for extropy for different values of k.
    Figure 6.  The BEs of residual extropy with BLOFs.
    Figure 7.  The BEs of extropy with BLOFs.

    Here are some observations on the MLEs and BEs of the extropy and residual extropy results displayed in Tables 1–3.

    ● The MSEs of the MLEs and BEs decrease when n increases.

    ● MSEs of the MLEs and BEs of extropy and residual extropy decrease as r increases with fixed n,k,T (Figures 2 and 3).

    ● The BE of ψBLN(x) at q=0.6 and ψ(t)BLN at q=0.6 is favored over the others in terms of having the lowest MSE for different values of T, resulting in reduced variability.

    ● MSEs of the MLEs and BEs of extropy and residual extropy decrease as k increases with fixed n,r,T (Figures 4 and 5).

    ● The MSE values show that, in most cases, the BEs of extropy are best under the BGE loss function, whereas the BEs of residual extropy are best under the BLN loss function.

    ● The BEs of extropy increase as the number of failures r or k increases. Additionally, as demonstrated in Figures 6 and 7, the BEs of residual extropy decrease as the number of failures r or k increases.

    ● The BE of extropy and its residual yields a smaller value than the MLE.

    ● The BEs of ψ(x) and ψt(x) under the BLN loss function at q=0.6, as well as those under the BGE loss function at q=0.6, carry a large amount of information, since they have a low level of uncertainty.

    These data were used in [38]; they represent the daily average wind speeds for Cairo city from January 1, 2009 to October 4, 2009. The data were produced by the National Climatic Data Center in Asheville, NC, United States of America, and are recorded as follows:

    2.7,3.1,3.2,3.2,3.3,3.5,3.5,3.8,3.8,3.8,4.2,4.2,4.3,4.3,4.3,4.4,4.5,4.7,4.7,4.8,4.9,4.9,4.9,4.9,5,5,5.1,5.2,5.2,5.3,5.4,5.4,5.4,5.4,5.5,5.5,5.6,5.6,5.6,5.7,5.7,5.7,5.8,5.8,6,6.1,6.3,6.4,6.6,6.7,6.7,6.8,6.8,6.8,6.8,6.9,7.1,7.3,7.3,7.3,7.4,7.5,7.6,7.6,7.7,7.8,7.9,8,8,8.2,8.2,8.6,8.7,8.8,8.9,9.3,9.3,9.4,9.4,9.4,9.5,9.6,9.8,9.8,9.9,10,10.1,10.3,10.6,10.7,11.1,11.3,12,12.2,12.4,12.5,13.3,13.8,14.4,14.7.

    The Kolmogorov-Smirnov (K-S) test was used to determine whether the data follow an EGD. The calculated K-S distance is 0.0808528, with a P-value of 0.504649. Figure 8 shows the estimated pdf and cdf.

    Figure 8.  EGD for real data with the estimated pdf and cdf.
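
    A check of this kind can be reproduced by fitting the EGD to the full wind-speed sample by maximum likelihood and passing the fitted cdf to a K-S routine. The sketch below is only illustrative, with assumed starting values and a truncated data vector; it is not claimed to return exactly the distance and P-value quoted above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

def egd_cdf(x, xi, gamma):
    """EGD cdf, Eq (2.1)."""
    return (1.0 - (1.0 + gamma * x) * np.exp(-gamma * x))**xi

def neg_loglik_full(params, data):
    """Negative EGD log-likelihood for a complete (uncensored) sample."""
    xi, gamma = params
    if xi <= 0 or gamma <= 0:
        return np.inf
    g = 1.0 - (1.0 + gamma * data) * np.exp(-gamma * data)
    return -np.sum(np.log(xi) + 2 * np.log(gamma) + np.log(data) - gamma * data
                   + (xi - 1.0) * np.log(g))

wind = np.array([2.7, 3.1, 3.2, 3.2, 3.3, 3.5, 3.5, 3.8, 3.8, 3.8])  # first few values; use the full list above
fit = minimize(neg_loglik_full, x0=[2.0, 0.5], args=(wind,), method="Nelder-Mead")
xi_hat, gamma_hat = fit.x
print(kstest(wind, lambda x: egd_cdf(x, xi_hat, gamma_hat)))
```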

    Now, let us examine what occurs if the data set is censored. Using the uncensored data set, we produce three artificial GT-Ⅰ HCS sets in the ways described below (see Table 4):

    Table 4.  The MLEs and BEs under the GT-Ⅰ HCS.
    n (r,k) T MLE BSEL BLN BGE
    q=−0.6 q=0.6 q=−0.6 q=0.6
    γ 100 (95, 85) 9 0.5341 0.5376 0.538 0.5373 0.5372 0.536
    Case Ⅰ ξ 4.9941 4.968 5.1480 4.793 4.943 4.8705
    ψ(x) -0.0564 -0.05813 -0.05812 -0.05814 -0.05809 -0.05794
    ψt(x) -0.4012 -0.4093 -0.4093 -0.4093 -0.4093 -0.4094
    γ 100 (95, 85) 13 0.5333 0.5366 0.5369 0.5363 0.5362 0.5351
    Case Ⅱ ξ 4.9769 4.9560 5.1205 4.7954 4.9332 4.8665
    ψ(x) -0.0563 -0.05793 -0.05792 -0.0579 -0.0578 -0.0577
    ψt(x) -0.4011 -0.4087 -0.4087 -0.4087 -0.4088 -0.4088
    γ 100 (95, 85) 11 0.5427 0.5461 0.5464 0.5458 0.5457 0.5446
    Case Ⅲ ξ 5.1743 5.1518 5.3343 4.9739 5.1273 5.0558
    ψ(x) -0.0573 -0.0574 -0.0574 -0.0574 -0.0574 -0.0572
    ψt(x) -0.3025 -0.3998 -0.3998 -0.3998 -0.3998 -0.3997


    Case Ⅰ: $T=9$, $k=85$, $r=95$; therefore, $D=85$ and $c=x_{k:n}=10$.

    Case Ⅱ: $T=13$, $k=85$, $r=95$; therefore, $D=95$ and $c=x_{r:n}=12.5$.

    Case Ⅲ: $T=11$, $k=85$, $r=95$; therefore, $D=92$ and $c=T=11$.

    ML and Bayesian estimation of extropy and residual extropy were carried out for these cases, using the Lindley method with the BLOFs (BSEL, BLN and BGE), $\omega=0.5$ and $q=(-0.6,\,0.6)$. Because no prior knowledge is available, a non-informative prior was used to calculate the BEs; thus, we chose $a_{1}=0$, $b_{1}=0$, $a_{2}=0$ and $b_{2}=0$.

    We note from this application that the BE of extropy and its residual yields a smaller value than the MLE. The BEs of extropy and its residual via the BLN and BGE loss functions at $q=0.6$ take larger values than those at $q=-0.6$. Finally, we conclude that the simulation study is supported by the real data.

    In this paper, we have investigated extropy, the complementary dual of entropy and an alternative measure of uncertainty, together with residual extropy as a measure of the residual uncertainty of a non-negative random variable, for the EGD. Maximum likelihood and Bayesian estimation of the parameters, extropy and residual extropy of the EGD under the GT-Ⅰ HCS are discussed. The BEs of extropy and residual extropy for the EGD are derived based on BLOFs (BSEL, BLN and BGE) and computed using the Lindley method, and the estimators are compared in terms of their MSE. An application to real-world data is also provided.

    In general, the MSE values decrease as the number of failures rises. Compared with the other estimates, the BE of residual extropy under the BLN loss function performed well, and the BE of extropy under the BGE loss function performed well in the majority of situations. The BEs of extropy increase as the number of failures r or k increases, whereas the BEs of residual extropy decrease. From the application results, for a positive value of q, the BE values of extropy and its residual under the BLN and BGE loss functions are larger than those for a negative value of q. Finally, we have highlighted that the real-data results agree with the simulation findings.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 549-150-1443). The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

    The authors declare no conflict of interest.

    The Fisher information matrix is presented here as follows:

    $(\sigma_{ij})_{N\times N}=\left(-\frac{\partial^{2}l}{\partial\xi\,\partial\gamma}\right)^{-1}$, where $l$ is the log-likelihood function, and this matrix is given by

    $\sigma_{ij}=\begin{bmatrix}-\frac{\partial^{2}l}{\partial\xi^{2}} & -\frac{\partial^{2}l}{\partial\xi\,\partial\gamma}\\ -\frac{\partial^{2}l}{\partial\gamma\,\partial\xi} & -\frac{\partial^{2}l}{\partial\gamma^{2}}\end{bmatrix}^{-1}_{(\hat\xi,\hat\gamma)};$

    using Eqs (3.5) and (3.6), we have

    $\frac{\partial^{2}l}{\partial\xi^{2}}=-\frac{D}{\xi^{2}}-\frac{(n-D)\left(\log\left[1-(1+\gamma c)e^{-\gamma c}\right]\right)^{2}\left[1-(1+\gamma c)e^{-\gamma c}\right]^{2\xi}}{\left(1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}\right)^{2}}-\frac{(n-D)\left(\log\left[1-(1+\gamma c)e^{-\gamma c}\right]\right)^{2}\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}},$
    $\frac{\partial^{2}l}{\partial\gamma^{2}}=-\frac{2D}{\gamma^{2}}-\frac{(n-D)\,\xi\left[c^{2}e^{-\gamma c}-c^{3}\gamma e^{-c\gamma}\right]\left(1-(1+\gamma c)e^{-\gamma c}\right)^{\xi-1}}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}-\frac{\xi(\xi-1)(n-D)\left(c^{2}\gamma e^{-c\gamma}\right)^{2}\left(1-(1+c\gamma)e^{-c\gamma}\right)^{\xi-2}}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}-\frac{(n-D)\,\xi^{2}\left(c^{2}\gamma e^{-c\gamma}\right)^{2}\left(1-(1+c\gamma)e^{-c\gamma}\right)^{-2+2\xi}}{\left(1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}\right)^{2}}+(\xi-1)\sum_{i=1}^{D}\left(-\frac{\left(\gamma x_{i}^{2}e^{-\gamma x_{i}}\right)^{2}}{\left(1-(1+\gamma x_{i})e^{-\gamma x_{i}}\right)^{2}}+\frac{x_{i}^{2}e^{-\gamma x_{i}}-\gamma x_{i}^{3}e^{-\gamma x_{i}}}{1-(1+\gamma x_{i})e^{-\gamma x_{i}}}\right),$

    and

    $\frac{\partial^{2}l}{\partial\xi\,\partial\gamma}=-\frac{(n-D)\,c^{2}\gamma e^{-c\gamma}\left(1-(1+c\gamma)e^{-c\gamma}\right)^{\xi-1}}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}-\frac{(n-D)\,\xi\,c^{2}\gamma e^{-c\gamma}\left(1-(1+c\gamma)e^{-c\gamma}\right)^{2\xi-1}\log\left(1-(1+c\gamma)e^{-c\gamma}\right)}{\left(1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}\right)^{2}}-\frac{(n-D)\,\xi\,c^{2}\gamma e^{-c\gamma}\left[1-(1+c\gamma)e^{-c\gamma}\right]^{\xi-1}\log\left[1-(1+c\gamma)e^{-c\gamma}\right]}{1-\left[1-(1+\gamma c)e^{-\gamma c}\right]^{\xi}}+\sum_{i=1}^{D}\frac{\gamma x_{i}^{2}e^{-\gamma x_{i}}}{1-(1+\gamma x_{i})e^{-\gamma x_{i}}}=\frac{\partial^{2}l}{\partial\gamma\,\partial\xi}.$


    [1] B. Epstein, Truncated life tests in the exponential case, Ann. Math. Stat., 25 (1954), 555–564. http://dx.doi.org/10.1214/aoms/1177728723 doi: 10.1214/aoms/1177728723
    [2] A. Childs, B. Chandrasekar, N. Balakrishnan, D. Kundu, Exact likelihood inference based on Type-Ⅰ and Type-Ⅱ hybrid censored samples from the exponential distribution, Ann. Inst. Stat. Math., 55 (2003), 319–330. https://doi.org/10.1007/BF02530502 doi: 10.1007/BF02530502
    [3] B. Chandrasekar, A. Childs, N. Balakrishnan, Exact likelihood inference for the exponential distribution under generalized Type-Ⅰ and Type-Ⅱ hybrid censoring, Nav. Res. Logist., 51 (2004), 994–1000. https://doi.org/10.1002/nav.20038 doi: 10.1002/nav.20038
    [4] C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech., 27 (1948), 379–432. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x doi: 10.1002/j.1538-7305.1948.tb01338.x
    [5] Y. Cho, H. Sun, K. Lee, An estimation of the entropy for a Rayleigh distribution based on doubly-generalized Type-Ⅱ hybrid censored samples, Entropy, 16 (2014), 3655–3669. https://doi.org/10.3390/e16073655 doi: 10.3390/e16073655
    [6] S. Liu, W. Gui, Estimating the entropy for Lomax distribution based on generalized progressively hybrid censoring, Symmetry, 11 (2019), 1219. https://doi.org/10.3390/sym11101219 doi: 10.3390/sym11101219
    [7] A. S. Hassan, A. N. Zaky, Estimation of entropy for inverse Weibull distribution under multiple censored data, J. Taibah Univ. Sci., 13 (2019), 331–337. https://doi.org/10.1080/16583655.2019.1576493 doi: 10.1080/16583655.2019.1576493
    [8] J. Yu, W. Gui, Y. Shan, Statistical inference on the Shannon entropy of inverse Weibull distribution under the progressive first-failure censoring, Entropy, 21 (2019), 1209. https://doi.org/10.3390/e21121209 doi: 10.3390/e21121209
    [9] A. A. H. Ahmadini, A. S. Hassan, A. N. Zaki, S. S. Alshqaq, Bayesian inference of dynamic cumulative residual entropy from Pareto Ⅱ distribution with application to COVID-19, AIMS Math., 6 (2020), 2196–2216. https://doi.org/10.3934/math.2021133 doi: 10.3934/math.2021133
    [10] A. A. Al-Babtain, A. S. Hassan, A. N. Zaky, I. Elbatal, M. Elgarhy, Dynamic cumulative residual Renyi entropy for Lomax distribution: Bayesian and non-Bayesian methods, AIMS Math., 6 (2021), 3889–3914. https://doi.org/10.3934/math.2021231 doi: 10.3934/math.2021231
    [11] A. S. Hassan, A. N. Zaky, Entropy Bayesian estimation for Lomax distribution based on record, Thailand Stat., 19 (2021), 96–115.
    [12] A. I. Al-Omari, A. S. Hassan, H. F. Nagy, A. R. Al-Anzi, L. Alzoubi, Entropy Bayesian analysis for the generalized inverse exponential distribution based on URRSS, Comput. Mater. Contin., 69 (2021), 3795–3811. https://doi.org/10.32604/cmc.2021.019061 doi: 10.32604/cmc.2021.019061
    [13] A. M. Almarashi, A. Algarni, A. S. Hassan, A. N. Zaky, M. Elgarhy, Bayesian analysis of dynamic cumulative residual entropy for Lindley distribution, Entropy, 23 (2021), 1256. https://doi.org/10.3390/e23101256 doi: 10.3390/e23101256
    [14] A. S. Hassan, E. A. Elsherpieny, R. E. Mohamed, Estimation of information measures for power-function distribution in the presence of outliers and their applications, Int. J. Inf. Commun. Technol., 21 (2022), 1–25. https://doi.org/10.32890/jict2022.21.1.1 doi: 10.32890/jict2022.21.1.1
    [15] B. A. Helmy, A. S. Hassan, A. K. El-Kholy, Analysis of uncertainty measure using unified hybrid censored data with applications, J. Taibah Univ. Sci., 15 (2022), 1130–1143. https://doi.org/10.1080/16583655.2021.2022901 doi: 10.1080/16583655.2021.2022901
    [16] A. S. Hassan, E. A. Elsherpieny, R. E. Mohamed, Classical and Bayesian estimation of entropy for Pareto distribution in presence of outliers with application, Sankhya A, 85 (2023), 707–740. https://doi.org/10.1007/s13171-021-00274-z doi: 10.1007/s13171-021-00274-z
    [17] D. Ellerman, An introduction to logical entropy and its relation to Shannon entropy, Int. J. Semant. Comput., 7 (2013), 121–145. https://doi.org/10.48550/arXiv.2112.01966 doi: 10.48550/arXiv.2112.01966
    [18] D. Markechová, B. Riečan, Logical entropy of fuzzy dynamical systems, Entropy, 18 (2016), 157. https://doi.org/10.3390/e18040157 doi: 10.3390/e18040157
    [19] S. Boffa, D. Ciucci, Logical entropy and aggregation of fuzzy orthopartitions, Fuzzy Set. Syst., 455 (2023), 77–101. https://doi.org/10.1016/j.fss.2022.07.014 doi: 10.1016/j.fss.2022.07.014
    [20] F. Lad, G. Sanfilippo, G. Agro, Extropy: Complementary dual of entropy, Stat. Sci., 30 (2015), 40–58. https://doi.org/10.48550/arXiv.1109.6440 doi: 10.48550/arXiv.1109.6440
    [21] T. Gneiting, A. E. Raftery, Strictly proper scoring rules, prediction and estimation, J. Am. Stat. Assoc., 102 (2007), 359–378. https://doi.org/10.1198/016214506000001437 doi: 10.1198/016214506000001437
    [22] S. Furuichi, F. C. Mitroi, Mathematical inequalities for some divergences, Physica A, 391 (2012), 388–400. https://doi.org/10.48550/arXiv.1104.5603 doi: 10.48550/arXiv.1104.5603
    [23] G. Qiu, The extropy of order statistics and record values, Stat. Probab. Lett., 120 (2017), 52–60. https://doi.org/10.1016/j.spl.2016.09.016 doi: 10.1016/j.spl.2016.09.016
    [24] G. Qiu, K. Jia, The residual extropy of order statistics, Stat. Probab. Lett., 133 (2018), 15–22. https://doi.org/10.1016/j.spl.2017.09.014 doi: 10.1016/j.spl.2017.09.014
    [25] M. Z. Raqab, G. Qiu, On extropy properties of ranked set sampling, Am. J. Theor. Appl. Stat., 53 (2019), 210–226. https://doi.org/10.1080/02331888.2018.1533963 doi: 10.1080/02331888.2018.1533963
    [26] H. A. Noughabi, J. Jarrahiferiz, On the estimation of extropy, J. Nonparametr. Stat., 31 (2019), 88–99. https://doi.org/10.1080/10485252.2018.1533133 doi: 10.1080/10485252.2018.1533133
    [27] R. Hazeb, M. Z. Raqab, H. A. Bayoud, Non-parametric estimation of the extropy and the entropy measures based on progressive type-Ⅱ censored data with testing uniformity, J. Stat. Comput. Simul., 91 (2021), 1–33. https://doi.org/10.1080/00949655.2021.1888953 doi: 10.1080/00949655.2021.1888953
    [28] A. S. Hassan, E. Elsherpieny, R. Mohamed, Cumulative residual extropy for Pareto distribution in the presence of outliers: Bayesian and non-Bayesian methods, Stat. Optim. Inf. Comput., 10 (2022), 1095–1109. https://doi.org/10.19139/soic-2310-5070-1200 doi: 10.19139/soic-2310-5070-1200
    [29] R. C. Gupta, P. L. Gupta, R. D. Gupta, Modeling failure time data by Lehman alternatives, Commun. Stat. Theor. M., 27 (1998), 887–904. https://doi.org/10.1080/03610929808832134 doi: 10.1080/03610929808832134
    [30] A. I. Shawky, R. A. Bakoban, Bayesian and non-Bayesian estimations on the exponentiated gamma distribution, Appl. Math. Sci., 2 (2008), 2521–2530.
    [31] A. I. Shawky, R. A. Bakoban, Order statistics from exponentiated gamma distribution and associated inference, Int. J. Contemp. Math. Sci., 4 (2009), 71–91.
    [32] N. Feroze, M. Aslam, Bayesian analysis of exponentiated gamma distribution under type Ⅱ censored samples, Sci. J. Pure. Appl. Sci., 49 (2012), 30–39.
    [33] U. Singh, S. K. Singh, A. S. Yadav, Bayesian estimation for exponentiated gamma distribution under progressive type-Ⅱ censoring using different approximation techniques, Data Sci. J., 13 (2015), 551–568. https://doi.org/10.6339/JDS.201507_13(3).0008 doi: 10.6339/JDS.201507_13(3).0008
    [34] M. A. W. Mahmoud, L. S. Diab, M. G. M. Ghazal, A. H. Baria, Bayesian prediction of exponentiated gamma distribution based on unified hybrid censored data, J. Stat. : Adv. Theory Appl., 22 (2019), 21–43. https://doi.org/10.18642/jsata_7100122102 doi: 10.18642/jsata_7100122102
    [35] M. A. W. Mahmoud, L. S. Diab, M. G. M. Ghazal, A. H. Baria, On study of exponentiated gamma distribution based on unified hybrid censored data, Al-Azhar Bull. Sci., 30 (2019), 13–27. https://doi.org/10.21608/absb.2019.86749 doi: 10.21608/absb.2019.86749
    [36] A. Zellner, Bayesian and non-Bayesian estimation using balanced loss functions, Springer, New York, 1994. https://doi.org/10.1007/978-1-4612-2618-5_28
    [37] D. V. Lindley, Approximate Bayesian methods, Trab. Estad. Invest. Oper., 31 (1980), 223–245. https://doi.org/10.1007/BF02888353 doi: 10.1007/BF02888353
    [38] M. G. M. Ghazal, H. M. Hasaballah, Exponentiated Rayleigh distribution: A Bayes study using MCMC approach based on unified hybrid censored data, J. Adv. Math., 12 (2017), 6863–6880. https://doi.org/10.24297/jam.v12i12.4599 doi: 10.24297/jam.v12i12.4599