
Continuous developments in unit interval distributions have shown effectiveness in modeling proportional data. However, challenges persist in capturing the diverse dispersion characteristics of real-world data. This study introduces the unit logistic-exponential (ULE) distribution, a flexible probability model built upon the logistic-exponential distribution and designed for data confined to the unit interval. The statistical properties of the ULE distribution were studied, and parameter estimation was conducted through maximum likelihood estimation, Bayesian methods, maximum product spacings, and least squares estimation. A thorough simulation analysis using numerical techniques such as the quasi-Newton method and Markov chain Monte Carlo highlights the performance of the estimation methods, emphasizing their accuracy and reliability. The study reveals that the ULE distribution, paired with tools like randomized quantile and Cox-Snell residuals, provides robust assessments of goodness of fit, making it well-suited for real-world applications. Key findings demonstrate that the unit logistic-exponential distribution captures diverse data patterns effectively and improves reliability assessment in practical contexts. When applied to two real-world datasets, one from the medical field and the other from the economic sector, the ULE distribution consistently outperforms existing unit interval models, showcasing lower error rates and enhanced flexibility in tail behavior. These results underline the distribution's potential impact in areas requiring precise modeling of proportions, ultimately supporting better decision-making and predictive analyses.
Citation: Hanan Haj Ahmad, Kariema A. Elnagar. A novel quantile regression for fractiles based on unit logistic exponential distribution[J]. AIMS Mathematics, 2024, 9(12): 34504-34536. doi: 10.3934/math.20241644
Unit distributions can be obtained using various transformations of well-known continuous distributions, including the negative exponential function transformation. Unit distributions give more flexibility to the original distribution along the unit interval without adding new parameters. Several unit distributions have been used to model data for percentages in many areas such as biology, mortality, recovery rates, economics, health, and reliability analysis, among others. New distributions have been proposed on the unit interval, such as the unit generalized half normal by Korkmaz [1], unit inverse Gaussian by Ghitany et al. [2], unit Gamma distribution by Consul and Jain [3], unit Weibull distribution by Mazucheli et al. [4], unit Gompertz distribution by Mazucheli et al. [5], unit Omega distribution by Abd El-Monsef et al. [6], unit Burr-XII distribution by Korkmaz and Chesneau [7], the unit-half normal distribution by Bakouch et al. [8], and unit exponential Pareto distribution by Haj Ahmad et al. [9]. The unit-exponentiated half-logistic distribution, and the bounded power Lomax were introduced by Hassan et al. ([10] and [11], respectively). Fayomi et al. [12] and [13] studied the unit-exponentiated Lomax distribution and the unit–power Burr X distribution.
The exponential distribution is widely utilized for modeling real-life data, primarily due to its memoryless property and analytical simplicity. However, its utility is somewhat constrained by its constant hazard rate and decreasing density function. To address these limitations and enhance its flexibility, numerous researchers have developed modifications of the exponential distribution; for example, Hassan et al. [14] introduced a new four-parameter extended exponential distribution. Recent extensions include the exponentiated exponential [15], beta exponential [16], transmuted generalized exponential [17], and the Kumaraswamy transmuted-G family of distributions [18]. A new three-parameter extension of the exponential distribution was introduced by [19]; also, [20] studied a new method for generating distributions with an application to the exponential distribution. The alpha power exponential distribution was discussed by [21], and the extended exponential distribution and its applications were discussed by [22]. The Marshall-Olkin alpha power family of distributions was studied by [23], the new three-parameter exponential distribution by [24], and finally, the Topp-Leone exponential distribution was studied by [25].
Lan and Leemis [26] created a new generalized exponential distribution, called the logistic-exponential (LE) distribution, with explicit-form density and distribution functions. The LE distribution has two parameters and accommodates constant, increasing, decreasing, bathtub, and upside-down bathtub failure rate shapes. Its only disadvantage is that it does not yield explicit-form expressions for the moments, so they must be computed numerically using software such as Mathematica, R, or Matlab.
For the LE distribution, the cumulative distribution function (cdf) is given by
$$F(x)=1-\left[1+\left(e^{\beta x}-1\right)^{\theta}\right]^{-1},\quad x>0, \tag{1.1}$$
with scale parameter β>0 and shape parameter θ>0. The probability density function (pdf) is given by
$$f(x)=\frac{\theta\beta\, e^{\beta x}\left(e^{\beta x}-1\right)^{\theta-1}}{\left[1+\left(e^{\beta x}-1\right)^{\theta}\right]^{2}},\quad x>0. \tag{1.2}$$
This paper focuses on a unique special case of the LE distribution, known as the unit-logistic-exponential distribution, confined to the unit interval. The ULE distribution is particularly valuable for its capability to model long-tailed characteristics observed in many real-world datasets. Beyond its utility in capturing long-tailed behavior, the ULE distribution also plays a crucial role in uncovering patterns and trends within the data. By examining the distribution's shape, one can derive insights into the underlying factors influencing the data, which can guide resource allocation decisions or improve the accuracy of future predictions.
Traditional regression methods typically assess differences in outcome variables between populations by focusing on the mean (such as in ordinary least squares regression) or by evaluating a population's average effect (as seen in logistic regression models), after accounting for other explanatory variables. Quantile regression, however, offers the flexibility to examine how the slopes of the regression line vary across different quantiles of the data distribution. For instance, while the median line might remain stable, the 90th quantile prediction line may show a significant upward trend, whereas the 10th quantile prediction line could indicate a notable downward trend.
The primary objective of the research is to explore the new ULE distribution, examine its statistical characteristics, and demonstrate its effectiveness by implementing it in two real-world data examples. The motivations for studying the ULE distribution are as follows: (ⅰ) it is a unique model because it is defined on the unit interval [0,1] instead of the positive real numbers; (ⅱ) it exhibits significant flexibility in tail behavior, making it useful in risk assessment with relatively better outcomes; and (ⅲ) the strength of this new distribution lies in its ability to model and fit various real-world data with lower error rates compared to other competing models.
This new model demonstrates its effectiveness in fitting the recovery rate for the CD34+ cells, which is a crucial indicator of the sufficiency of peripheral blood stem cells. A more recent purging technique involves the positive selection of CD34+ cells. By focusing on positive CD34+ cell selection, the variability in antigen expression on tumor cells becomes irrelevant, allowing this method to be applied for purging across almost all tumor types, assuming that the tumor cells do not express the CD34+ antigen. The most advanced method currently available for the positive selection of CD34+ cells, enabling their clinical isolation with extremely high purity, is magnetic-activated cell sorting (MACS).
Beyond its applications in modeling the recovery rates, it has a high impact on modeling failure times of components in economic and engineering contexts. Cost-effectiveness analysis (CEA) offers a systematic framework for regulators to compare the quantified benefits of legislative or regulatory decisions against their associated costs. When applied correctly, CEA compels policymakers and regulators to rigorously quantify the health or environmental benefits of potential government actions aimed at reducing risks. It also provides clear metrics for decision-makers, facilitating comparisons among different strategies for addressing the same issue, such as mitigating risks to human health, public safety, or the environment.
The ULE model, while promising and flexible, has some limitations that should be acknowledged: (a) The estimation of parameters in the ULE model, particularly using methods like Bayesian approaches and maximum product spacings, can be computationally intensive, requiring substantial computational resources and expertise in advanced numerical methods. (b) Although the ULE model was derived from the LE distribution, finding certain statistical properties, such as moments, often requires numerical techniques, such as the quasi-Newton and Markov chain Monte Carlo methods, which can increase the computational burden and may introduce numerical stability issues in certain scenarios. (c) The ULE model is specifically designed for data within the unit interval (0,1), limiting its applicability to datasets that can be naturally transformed or confined to this range.
The remainder of this work is arranged as follows: In Section 2, the ULE distribution is presented and its properties are discussed. In Section 3, the maximum likelihood, Bayes, maximum product spacings, least squares estimate of θ and β, and the Fisher information matrix are formulated. In Section 4, simulation studies are introduced. In Section 5, a quantile regression model based on ULE distribution is obtained. In Section 6, the two datasets are studied. Finally, the concluding remarks are provided in Section 7.
Let $X$ be a continuous random variable following the LE distribution with parameters $\theta$ and $\beta$; then the cdf of the random variable $Y=e^{-X}$ can be represented by
$$F_{ULE}(y)=\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]^{-1},\quad 0<y<1,\ \beta,\theta>0. \tag{2.1}$$
The pdf for the ULE distribution is defined by
$$f_{ULE}(y)=\frac{\theta\beta\, y^{-(\beta+1)}\left(y^{-\beta}-1\right)^{\theta-1}}{\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]^{2}},\quad 0<y<1. \tag{2.2}$$
The hazard rate function (HRF) of ULE distribution is as follows
$$h_{ULE}(y)=\frac{\theta\beta\, y^{-(\beta+1)}}{\left(y^{-\beta}-1\right)\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]}. \tag{2.3}$$
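For numerical work, the cdf (2.1), pdf (2.2), and hazard rate (2.3) can be coded directly. The sketch below is illustrative only; the helper names are ours, not from the paper.

```python
def ule_cdf(y, theta, beta):
    """ULE cdf, Eq (2.1): [1 + (y^-beta - 1)^theta]^-1 on 0 < y < 1."""
    return 1.0 / (1.0 + (y ** (-beta) - 1.0) ** theta)

def ule_pdf(y, theta, beta):
    """ULE pdf, Eq (2.2)."""
    u = y ** (-beta) - 1.0
    return theta * beta * y ** (-(beta + 1.0)) * u ** (theta - 1.0) / (1.0 + u ** theta) ** 2

def ule_hrf(y, theta, beta):
    """ULE hazard rate, Eq (2.3); equals pdf / (1 - cdf)."""
    u = y ** (-beta) - 1.0
    return theta * beta * y ** (-(beta + 1.0)) / (u * (1.0 + u ** theta))
```

A convenient sanity check: for $\theta=\beta=1$ the cdf reduces to $F(y)=y$, i.e., the standard uniform distribution.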
More statistical properties and related functions of the ULE distribution are presented in the following subsections.
The first derivative of the pdf of ULE distribution is
$$\frac{\partial f_{ULE}(x)}{\partial x}=\frac{\beta\theta\left(x^{-\beta}-1\right)^{\theta}\Psi_1(x)}{x^{2}\left(x^{\beta}-1\right)^{2}\left[\left(x^{-\beta}-1\right)^{\theta}+1\right]^{3}},$$
where $\Psi_1(x)=-\beta\theta+(\beta\theta-1)\left(x^{-\beta}-1\right)^{\theta}+(\beta+1)x^{\beta}\left[\left(x^{-\beta}-1\right)^{\theta}+1\right]-1$.
The above equations require a numerical evaluation to obtain the local maximum, minimum, and inflection points.
Figure 1 illustrates various possible pdf shapes for specific values of the parameters θ and β. The pdf can exhibit different forms, such as decreasing (D), increasing (I), unimodal, bathtub-shaped, or increasing-decreasing-increasing, depending on the chosen parameter values. This shape versatility makes the ULE distribution well-suited for analyzing data within the unit interval. Additionally, the left panel of Figure 2 provides a visual representation of the cdf of the ULE distribution, as defined in Eq (2.1).
For the hazard rate function of ULE distribution, we have
$$\lim_{y\to 1^{-}}h_{ULE}(y)=\infty$$
and
$$\lim_{y\to 0^{+}}h_{ULE}(y)=0\quad\text{for}\ \beta>1,\ \beta\theta>1.$$
The first derivative for hULE(y) is given by
$$\frac{\partial h_{ULE}(y)}{\partial y}=\frac{\beta\theta\,\Psi_2(y)}{y^{2}\left(y^{\beta}-1\right)^{2}\left[\left(y^{-\beta}-1\right)^{\theta}+1\right]^{2}},$$
where $\Psi_2(y)=(\beta\theta-1)\left(y^{-\beta}-1\right)^{\theta}+(\beta+1)y^{\beta}\left[\left(y^{-\beta}-1\right)^{\theta}+1\right]-1$. Clearly, the sign of $\partial h_{ULE}(y)/\partial y$ depends on the sign of $\Psi_2(y)$. The HRF (2.3) is increasing for $\beta>1$ and $\beta\theta>1$. Additionally, the right panel of Figure 2 shows that for different values of the parameters $\beta$ and $\theta$, the HRF for ULE can accommodate bathtub and IDI shapes.
The $\tau$th quantile of the ULE distribution has a simple closed form and will be utilized later to define a new quantile regression model based on the ULE distribution. The quantile function of the ULE distribution is given by
$$Q(\tau)=\left[1+\left(\frac{1}{\tau}-1\right)^{\frac{1}{\theta}}\right]^{-\frac{1}{\beta}}. \tag{2.4}$$
A random variate can be generated via the inverse transformation method by
$$Y=\left[1+\left(\frac{1}{U}-1\right)^{\frac{1}{\theta}}\right]^{-\frac{1}{\beta}},\quad\text{where}\ U\sim\mathrm{Uniform}(0,1).$$
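The inverse-transform step above can be sketched as follows; this is a minimal illustration, and the function names are ours.

```python
import numpy as np

def ule_quantile(tau, theta, beta):
    """ULE quantile function, Eq (2.4)."""
    return (1.0 + (1.0 / tau - 1.0) ** (1.0 / theta)) ** (-1.0 / beta)

def ule_rvs(n, theta, beta, seed=None):
    """Draw n ULE variates by the inverse-transform method: Y = Q(U), U ~ Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    return ule_quantile(rng.uniform(size=n), theta, beta)
```

All generated values fall strictly inside the unit interval, as required by the support of the distribution.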
To analyze the effect of the parameters β and θ on the skewness and kurtosis of the ULE distribution, we check the trend of the Galton skewness (sk) and the Moors kurtosis (ku), as described in the following expressions:
$$sk=\frac{Q\left(\frac{3}{4}\right)+Q\left(\frac{1}{4}\right)-2Q\left(\frac{2}{4}\right)}{Q\left(\frac{3}{4}\right)-Q\left(\frac{1}{4}\right)},$$
and
$$ku=\frac{Q\left(\frac{3}{8}\right)-Q\left(\frac{1}{8}\right)+Q\left(\frac{7}{8}\right)-Q\left(\frac{5}{8}\right)}{Q\left(\frac{3}{4}\right)-Q\left(\frac{1}{4}\right)},$$
where Q(.) is the quantile function (2.4). Plots for both measures as functions of β and θ are given in Figure 3. It is observed that the skewness decreases with an increase of β. For β=1, the ULE distribution tends to be symmetric, and for 0<β<1, it is positively skewed and decreases toward 0 as θ increases. Conversely, for β>1, the ULE distribution has negative skewness, which increases toward 0 as θ increases. Additionally, the kurtosis increases with θ, while it initially decreases and then increases as β increases.
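Because both measures depend only on the closed-form quantile function (2.4), they are straightforward to evaluate numerically. A minimal sketch (helper names are ours), which also reproduces the $\beta=1$ symmetry noted above:

```python
def ule_quantile(tau, theta, beta):
    """ULE quantile function, Eq (2.4)."""
    return (1.0 + (1.0 / tau - 1.0) ** (1.0 / theta)) ** (-1.0 / beta)

def galton_skewness(theta, beta):
    """Galton skewness computed from the quartiles of the ULE distribution."""
    q = lambda t: ule_quantile(t, theta, beta)
    return (q(0.75) + q(0.25) - 2.0 * q(0.5)) / (q(0.75) - q(0.25))

def moors_kurtosis(theta, beta):
    """Moors kurtosis computed from the octiles of the ULE distribution."""
    q = lambda t: ule_quantile(t, theta, beta)
    return (q(3 / 8) - q(1 / 8) + q(7 / 8) - q(5 / 8)) / (q(0.75) - q(0.25))
```

For $\beta=1$ the Galton skewness vanishes for any $\theta$, matching the symmetry observed in Figure 3, while for $0<\beta<1$ it is positive.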
As stated by [26], the moments of the logistic-exponential distribution exist but cannot be expressed in closed form. Likewise, the ULE distribution does not have a closed-form expression for its moments, which must be computed numerically via
$$\mu'_{r}=E\left(Y^{r}\right)=\int_{0}^{1}y^{r}f_{ULE}(y)\,dy,$$
where $\mu'_{r}$ denotes the $r$th moment of the ULE distribution.
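Such integrals are easily handled by standard quadrature routines; a sketch using SciPy (helper names are ours):

```python
from scipy.integrate import quad

def ule_pdf(y, theta, beta):
    """ULE pdf, Eq (2.2)."""
    u = y ** (-beta) - 1.0
    return theta * beta * y ** (-(beta + 1.0)) * u ** (theta - 1.0) / (1.0 + u ** theta) ** 2

def ule_moment(r, theta, beta):
    """r-th raw moment of the ULE distribution by numerical integration over (0,1)."""
    value, _abserr = quad(lambda y: y ** r * ule_pdf(y, theta, beta), 0.0, 1.0)
    return value
```

For $\theta=\beta=1$ the ULE distribution reduces to Uniform(0,1), so $\mu'_1=1/2$ and $\mu'_2=1/3$, which serves as a check on the quadrature.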
When simulating lifetime systems with component structures, order statistics, also called ordered random variables, must be considered. Here, along with their typical distributional properties, the order statistics of the ULE distribution are presented. Let $y_1,y_2,...,y_n$ denote a random sample of size $n$ from the ULE distribution having cdf $F(y)$ and pdf $f(y)$. The order statistics are denoted by $Y_{(1)}\le Y_{(2)}\le...\le Y_{(n)}$, where $Y_{(1)}=\min(Y_1,Y_2,...,Y_n)$ and $Y_{(n)}=\max(Y_1,Y_2,...,Y_n)$. The $r$th order statistic's probability density function is given by
$$f_{Y_{(r)}}(y)=\frac{1}{B(r,n-r+1)}f(y)\left[F(y)\right]^{r-1}\left[1-F(y)\right]^{n-r}=\frac{1}{B(r,n-r+1)}\sum_{j=0}^{n-r}(-1)^{j}\binom{n-r}{j}f(y)\left[F(y)\right]^{j+r-1}$$
$$=\frac{\theta\beta}{B(r,n-r+1)}\sum_{j=0}^{n-r}(-1)^{j}\binom{n-r}{j}y^{-(\beta+1)}\left(y^{-\beta}-1\right)^{\theta-1}\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]^{-j-r-1},$$
where y∈(0,1).
In particular, the pdf of Y1, can be expressed as
$$f_{Y_{(1)}}(y)=\frac{\theta\beta}{B(1,n)}\sum_{j=0}^{n-1}(-1)^{j}\binom{n-1}{j}y^{-(\beta+1)}\left(y^{-\beta}-1\right)^{\theta-1}\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]^{-j-2}.$$
Also, the pdf of Yn, can be expressed as
$$f_{Y_{(n)}}(y)=\frac{\theta\beta}{B(n,1)}\,y^{-(\beta+1)}\left(y^{-\beta}-1\right)^{\theta-1}\left[1+\left(y^{-\beta}-1\right)^{\theta}\right]^{-n-1}.$$
Let $Y_1$ and $Y_2$ be two random variables with pdfs $f_1(y)$ and $f_2(y)$, respectively. Then $Y_1$ is said to be stochastically smaller than $Y_2$ in the likelihood ratio order ($Y_1\le_{lr}Y_2$) if the ratio $f_2(y)/f_1(y)$ is non-decreasing in $y$.
The ULE distribution admits a stochastic ordering, as illustrated in the next proposition.
Proposition 2.1. Let $Y_1$ and $Y_2$ be two random variables such that $Y_1\sim ULE(\theta_1,\beta)$ and $Y_2\sim ULE(\theta_2,\beta)$. If $\theta_1\ge\theta_2$, then $Y_1\le_{lr}Y_2$.
Proof. The proof is straightforward by taking the first derivative of $f_2(y;\theta_2,\beta)/f_1(y;\theta_1,\beta)$.
Likelihood ratio order implies a hazard rate order (Y1≤hrY2), which in turn implies the usual stochastic order (Y1≤stY2); for further details on stochastic orders, see [27].
In this research, four estimation methods, namely maximum likelihood estimation, Bayesian estimation, maximum product spacings, and least squares estimation, are employed to ensure robust and comprehensive parameter estimation. Each method offers unique statistical properties and benefits, contributing to a complete evaluation of the model's performance. Maximum likelihood estimation is a widely used method known for its asymptotic efficiency and consistency, producing parameter estimates that maximize the likelihood of observing the given data; this makes it particularly useful when seeking estimates with desirable large-sample properties. Bayesian estimation, on the other hand, incorporates prior information along with the observed data, providing a flexible framework for parameter inference and enabling the calculation of credible intervals that offer a probabilistic interpretation of parameter uncertainty. The maximum product spacings method is used as an alternative to the maximum likelihood method and is particularly effective when dealing with datasets that include extreme or non-uniformly distributed observations; it often provides more stable estimates in cases where traditional likelihood methods face convergence issues. The least squares estimation method is selected for its simplicity and its ability to minimize the sum of squared differences between observed and predicted values, making it valuable for reducing overall estimation error.
The maximum likelihood (ML) estimation method is the most well-known classical inference in statistics. The ML estimates are obtained based on maximizing the log-likelihood function of the ULE distribution. Variance-covariance matrix of the unknown population parameters is utilized to construct the asymptotic confidence intervals for θ and β. Numerical methods are used to determine the required estimators, specifically adopting the well-known quasi-Newton method.
Suppose y1,y2,...,yn represent a random sample of size n with ULE distribution. The likelihood function for ULE distribution is given as
$$L(\theta,\beta)=\prod_{i=1}^{n}\frac{\theta\beta\, y_i^{-(\beta+1)}\left(y_i^{-\beta}-1\right)^{\theta-1}}{\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{2}}=\theta^{n}\beta^{n}\prod_{i=1}^{n}y_i^{-(\beta+1)}\left(y_i^{-\beta}-1\right)^{\theta-1}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2}, \tag{3.1}$$
and log-likelihood function of the random sample as
$$\ln L(\theta,\beta)=n\ln\theta+n\ln\beta-(\beta+1)\sum_{i=1}^{n}\ln y_i+(\theta-1)\sum_{i=1}^{n}\ln\left(y_i^{-\beta}-1\right)-2\sum_{i=1}^{n}\ln\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]. \tag{3.2}$$
Taking the partial derivatives of Eq (3.2) with respect to $\beta$ and $\theta$, the ML estimates ($\hat\beta$ and $\hat\theta$) are obtained by solving the following nonlinear system of equations:
$$\frac{\partial}{\partial\beta}\ln L(\theta,\beta)=\frac{n}{\beta}-\sum_{i=1}^{n}\ln y_i-(\theta-1)\sum_{i=1}^{n}\frac{y_i^{-\beta}\ln y_i}{y_i^{-\beta}-1}+2\sum_{i=1}^{n}\frac{\theta\left(y_i^{-\beta}-1\right)^{\theta-1}y_i^{-\beta}\ln y_i}{1+\left(y_i^{-\beta}-1\right)^{\theta}}=0 \tag{3.3}$$
and
$$\frac{\partial}{\partial\theta}\ln L(\theta,\beta)=\frac{n}{\theta}+\sum_{i=1}^{n}\ln\left(y_i^{-\beta}-1\right)-2\sum_{i=1}^{n}\frac{\left(y_i^{-\beta}-1\right)^{\theta}\ln\left(y_i^{-\beta}-1\right)}{1+\left(y_i^{-\beta}-1\right)^{\theta}}=0. \tag{3.4}$$
The log-likelihood equations defined in Eqs (3.3) and (3.4) do not provide a closed-form solution. Since obtaining a solution is challenging, it is often more practical to employ nonlinear optimization techniques, such as the quasi-Newton algorithm, to numerically maximize the log-likelihood function. Additionally, explicit confidence intervals for these parameters cannot be directly constructed, necessitating the use of approximate methods to determine the confidence intervals for θ and β. To achieve this, we first calculate the second-order partial derivatives, which are essential for deriving the Fisher information matrix.
The inverse of the observed Fisher information matrix, which yields the asymptotic variance-covariance matrix of the estimates, is given by
$$I^{-1}=\begin{bmatrix}-\frac{\partial^{2}}{\partial\theta^{2}}\ln L(\theta,\beta) & -\frac{\partial^{2}}{\partial\theta\partial\beta}\ln L(\theta,\beta)\\[4pt] -\frac{\partial^{2}}{\partial\beta\partial\theta}\ln L(\theta,\beta) & -\frac{\partial^{2}}{\partial\beta^{2}}\ln L(\theta,\beta)\end{bmatrix}^{-1}=\begin{bmatrix}\mathrm{var}(\hat\theta) & \mathrm{cov}(\hat\theta,\hat\beta)\\[4pt] \mathrm{cov}(\hat\beta,\hat\theta) & \mathrm{var}(\hat\beta)\end{bmatrix}, \tag{3.5}$$
where
$$\frac{\partial^{2}}{\partial\theta^{2}}\ln L(\theta,\beta)=-\frac{n}{\theta^{2}}-2\sum_{i=1}^{n}\frac{\left[\ln\left(y_i^{-\beta}-1\right)\right]^{2}\left(y_i^{-\beta}-1\right)^{\theta}}{\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{2}},$$
$$\frac{\partial^{2}}{\partial\theta\partial\beta}\ln L(\theta,\beta)=-\sum_{i=1}^{n}\frac{y_i^{-\beta}\ln y_i}{y_i^{-\beta}-1}-2\sum_{i=1}^{n}\frac{y_i^{-2\beta}\ln y_i\left(y_i^{-\beta}-1\right)^{\theta}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}+\theta\ln\left(y_i^{-\beta}-1\right)\right]}{\left(y_i^{-\beta}-1\right)\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{2}},$$
and
$$\frac{\partial^{2}}{\partial\beta^{2}}\ln L(\theta,\beta)=-\frac{n}{\beta^{2}}+\beta(\theta-1)\sum_{i=1}^{n}\ln y_i\left[\frac{y_i^{-\beta-1}}{y_i^{-\beta}-1}+\frac{y_i^{-2\beta-1}}{\left(y_i^{-\beta}-1\right)^{2}}\right]+2\sum_{i=1}^{n}\frac{\theta\left(\ln y_i\right)^{2}\left(y_i^{-\beta}-1\right)^{\theta}\left[y_i^{-\beta}\left(1+\left(y_i^{-\beta}-1\right)^{\theta}\right)-\theta\right]}{\left(y_i^{-\beta}-1\right)\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{2}}.$$
The (1−ζ)100% confidence interval for the parameters β and θ can be presented as
$$\left(\hat\theta_L,\hat\theta_U\right)=\hat\theta\pm z_{1-\frac{\zeta}{2}}\sqrt{\mathrm{var}(\hat\theta)}\quad\text{and}\quad\left(\hat\beta_L,\hat\beta_U\right)=\hat\beta\pm z_{1-\frac{\zeta}{2}}\sqrt{\mathrm{var}(\hat\beta)},$$
where $\hat\theta$ and $\hat\beta$ are the maximum likelihood estimators of $\theta$ and $\beta$, $z_{1-\frac{\zeta}{2}}$ is the $\left(1-\frac{\zeta}{2}\right)$ quantile of the standard normal distribution, and $\mathrm{var}(\hat\theta)$, $\mathrm{var}(\hat\beta)$ are the asymptotic variances of the ML estimates, computed from the inverse of the information matrix.
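As an illustration of this procedure, the log-likelihood (3.2) can be maximized numerically with a quasi-Newton routine, here SciPy's L-BFGS-B; the function names, starting values, and bounds are our own choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    """Negative of the ULE log-likelihood, Eq (3.2)."""
    theta, beta = params
    n = len(y)
    u = y ** (-beta) - 1.0
    ll = (n * np.log(theta) + n * np.log(beta)
          - (beta + 1.0) * np.sum(np.log(y))
          + (theta - 1.0) * np.sum(np.log(u))
          - 2.0 * np.sum(np.log1p(u ** theta)))
    return -ll

def fit_ule_mle(y, start=(1.0, 1.0)):
    """Quasi-Newton (L-BFGS-B) maximization of the ULE log-likelihood."""
    res = minimize(neg_log_lik, x0=np.asarray(start), args=(np.asarray(y),),
                   method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
    return res.x  # (theta_hat, beta_hat)
```

The asymptotic intervals then follow by plugging the estimated variances from (3.5) into the formulas above.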
This section addresses the challenge of deriving Bayes estimators (BEs) for the shape and scale parameters of the ULE distribution. Depending on the information known about the parameters, one can choose between informative and non-informative priors; when prior information about the parameters is available, it is preferable to utilize an informative prior. The Gamma prior is preferred in many Bayesian analyses because it is mathematically convenient, conceptually attractive, and computationally friendly. Its flexibility and conjugacy properties make it a powerful tool for representing prior knowledge and updating beliefs in light of new data; see Zellner et al. [28]. The following Gamma prior distributions are applied when $\theta$ and $\beta$ are independent parameters:
$$\pi_1(\theta)\propto\theta^{a_1-1}\exp(-b_1\theta),\quad\theta>0,$$
and
$$\pi_2(\beta)\propto\beta^{a_2-1}\exp(-b_2\beta),\quad\beta>0,$$
where ai and bi, i=1,2 are supposed to be known hyper-parameters and selected to reflect the prior distribution of the unknown parameters. Thus, we propose to use piecewise independent Gamma priors for both the shape and scale parameters for the ULE distribution because the Gamma distribution is very flexible; see Dey et al.[29] and Kundu and Howlader [30]. The joint prior distribution for θ and β is
$$\pi(\theta,\beta)\propto\theta^{a_1-1}\beta^{a_2-1}\exp(-b_1\theta-b_2\beta). \tag{3.6}$$
The posterior distribution is derived from the likelihood function Eq (3.1) and the prior distribution Eq (3.6). The joint posterior distribution of $\theta$ and $\beta$ is
$$\pi^{*}(\theta,\beta\mid\underline{y})\propto\theta^{a_1-1}\beta^{a_2-1}\exp(-b_1\theta-b_2\beta)\prod_{i=1}^{n}y_i^{-(\beta+1)}\left(y_i^{-\beta}-1\right)^{\theta-1}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2}.$$
The conditional posterior densities of $\theta$ and $\beta$ are given by
$$\pi_1^{*}(\theta\mid\beta,\underline{y})\propto\theta^{a_1-1}\exp(-b_1\theta)\prod_{i=1}^{n}\left(y_i^{-\beta}-1\right)^{\theta-1}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2} \tag{3.7}$$
and
$$\pi_2^{*}(\beta\mid\theta,\underline{y})\propto\beta^{a_2-1}\exp(-b_2\beta)\prod_{i=1}^{n}y_i^{-(\beta+1)}\left(y_i^{-\beta}-1\right)^{\theta-1}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2}. \tag{3.8}$$
The Metropolis-Hastings sampler is necessary for implementing the MCMC technique because the conditional posteriors of θ and β in the previous equations do not conform to any standard distribution. Tierney [31] introduced the Metropolis-Hastings (M-H) algorithm within Gibbs sampling to generate posterior samples, and the process is outlined as follows:
1) Suggest initial values to be (θ(0),β(0)).
2) Let j=1.
3) Using the M-H algorithm with Eqs (3.7) and (3.8), generate $\theta^{(j)}$ and $\beta^{(j)}$ from the normal proposal distributions $N(\theta^{(j-1)},\mathrm{var}(\theta))$ and $N(\beta^{(j-1)},\mathrm{var}(\beta))$, where $\mathrm{var}(\theta)$ and $\mathrm{var}(\beta)$ are obtained from the inverse Fisher information matrix.
(a) We set a N(θj−1,var(θ)) and N(βj−1,var(β)) as a proposal distribution for θ∗ and β∗ respectively.
(b) The acceptance rule is defined by the probabilities ρθ and ρβ, which compare the proposed samples against the current samples. We use the acceptance probability:
$$\rho_\theta=\min\left[1,\frac{\pi_1^{*}\left(\theta^{*}\mid\beta^{(j-1)},\underline{y}\right)}{\pi_1^{*}\left(\theta^{(j-1)}\mid\beta^{(j-1)},\underline{y}\right)}\right],$$
$$\rho_\beta=\min\left[1,\frac{\pi_2^{*}\left(\beta^{*}\mid\theta^{(j)},\underline{y}\right)}{\pi_2^{*}\left(\beta^{(j-1)}\mid\theta^{(j)},\underline{y}\right)}\right].$$
(c) Generate $u_1$ and $u_2$ from a Uniform(0,1) distribution.
(d) If $u_1<\rho_\theta$, accept the proposal and set $\theta^{(j)}=\theta^{*}$; otherwise, set $\theta^{(j)}=\theta^{(j-1)}$.
(e) If $u_2<\rho_\beta$, accept the proposal and set $\beta^{(j)}=\beta^{*}$; otherwise, set $\beta^{(j)}=\beta^{(j-1)}$.
4) Record $\theta^{(j)}$ and $\beta^{(j)}$.
5) Set j=j+1.
6) Steps from 3 to 5 must be repeated N times.
7) To obtain the credible intervals (CRIs) for $\theta$ and $\beta$, write $(\psi_1,\psi_2)=(\theta,\beta)$ and sort the retained draws $\psi_k^{(j)}$, $j=1,2,...,N$, in ascending order as $\psi_k^{(1)}<\psi_k^{(2)}<...<\psi_k^{(N)}$; then the $(1-\gamma)100\%$ CRI of $\psi_k$ is
$$\left(\psi_k^{\left(\frac{\gamma}{2}(N-M)\right)},\;\psi_k^{\left(\left(1-\frac{\gamma}{2}\right)(N-M)\right)}\right).$$
For practical application, we employ a burn-in period to allow the Markov chain to stabilize. Additionally, thinning reduces autocorrelation in the sampled chains, ensuring that the final posterior estimates represent independent samples. Specifically, after an initial burn-in of M samples, we retain every k−th sample to construct the posterior distribution, as indicated by ψ(j)k,j=M+1,...,N, for sufficiently large N.
The variances of the proposal distributions, var(θ) and var(β), are adjusted based on pilot runs to reach an acceptance rate that optimizes the performance of the M-H algorithm. Typically, an acceptance rate between 20% and 30% is targeted to balance convergence speed and sampling efficiency [32,33].
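A compact sketch of this M-H-within-Gibbs scheme is given below. The implementation choices are ours: hyper-parameters $a_1=b_1=a_2=b_2=1$, fixed random-walk step sizes instead of Fisher-information-based proposal variances, and illustrative function names.

```python
import numpy as np

def log_post(theta, beta, y, a1=1.0, b1=1.0, a2=1.0, b2=1.0):
    """Log of the joint posterior kernel: likelihood (3.1) times prior (3.6)."""
    if theta <= 0.0 or beta <= 0.0:
        return -np.inf
    n = len(y)
    u = y ** (-beta) - 1.0
    log_prior = (a1 - 1.0) * np.log(theta) - b1 * theta \
              + (a2 - 1.0) * np.log(beta) - b2 * beta
    log_lik = (n * (np.log(theta) + np.log(beta))
               - (beta + 1.0) * np.sum(np.log(y))
               + (theta - 1.0) * np.sum(np.log(u))
               - 2.0 * np.sum(np.log1p(u ** theta)))
    return log_prior + log_lik

def mh_within_gibbs(y, n_iter=3000, burn=500, thin=5, steps=(0.15, 0.15), seed=0):
    """M-H-within-Gibbs sampler with burn-in and thinning, following steps 1)-7)."""
    rng = np.random.default_rng(seed)
    theta, beta = 1.0, 1.0                      # step 1: initial values
    kept = []
    for j in range(n_iter):
        # step 3: update theta given beta, then beta given theta
        prop = theta + steps[0] * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop, beta, y) - log_post(theta, beta, y):
            theta = prop
        prop = beta + steps[1] * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(theta, prop, y) - log_post(theta, beta, y):
            beta = prop
        if j >= burn and (j - burn) % thin == 0:  # burn-in and thinning
            kept.append((theta, beta))
    return np.array(kept)
```

The Bayes estimates under the SEL function are then simply the means of the retained draws.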
In Bayesian analysis, there are four basic elements: the data, the model, the prior, and the loss function. A good Bayesian estimator must minimize the loss function. In practical studies, using loss functions in Bayesian inference is important for several reasons: (a) it helps in making decisions that are optimal under uncertainty; (b) it quantifies the consequences of making incorrect inferences or decisions, minimizing not only the error itself but also the expected cost of errors, which is often more relevant in practice; and (c) it can lead to more robust statistical methods. Several authors have discussed loss functions in Bayesian inference, with the squared error loss function being the most widely used.
The squared error loss (SEL) function, defined as $L(\hat\phi,\phi)=(\hat\phi-\phi)^{2}$, is used to obtain the approximate Bayes estimates of $\hat\theta$ and $\hat\beta$. The SEL function is symmetric, meaning that it gives equal weight to over- and under-estimation. In real life, we encounter many situations where over-estimation may be more serious than under-estimation, or vice versa. The Bayesian estimates under the SEL are given as:
$$\hat\theta_{BS}=\frac{1}{N-M}\sum_{j=M+1}^{N}\theta^{(j)},$$
and
$$\hat\beta_{BS}=\frac{1}{N-M}\sum_{j=M+1}^{N}\beta^{(j)}.$$
Cheng and Amin [34] and Ranneby [35] introduced the maximum product of spacings (MPS) estimation approach as an alternative to ML estimation. They examined the properties of MPS estimators and demonstrated that, in certain situations where ML estimators fail to provide consistent and asymptotically efficient estimates, MPS succeeds. These situations include instances where the likelihood function is unbounded from above, heavy-tailed distributions with unspecified scale and location parameters [36], and mixture distributions. Consequently, the MPS method overcomes the limitations of the ML method while preserving nearly all of its desirable large-sample properties [37]. Let the parameter vector be $\upsilon=(\theta,\beta)$; then the MPS function is
$$MPS(\upsilon)=\frac{1}{n+1}\sum_{i=1}^{n+1}\log\left[F\left(y_{(i)},\theta,\beta\right)-F\left(y_{(i-1)},\theta,\beta\right)\right].$$
The MPS estimates are obtained by maximizing $MPS(\upsilon)$ with respect to $\upsilon$; equivalently, they can be found by simultaneously solving the following nonlinear equations:
$$\frac{\partial MPS(\upsilon)}{\partial\theta}=\frac{1}{n+1}\sum_{i=1}^{n+1}\left[\frac{F'_{\theta}\left(y_{(i)},\theta,\beta\right)-F'_{\theta}\left(y_{(i-1)},\theta,\beta\right)}{F\left(y_{(i)},\theta,\beta\right)-F\left(y_{(i-1)},\theta,\beta\right)}\right]=0$$
and
$$\frac{\partial MPS(\upsilon)}{\partial\beta}=\frac{1}{n+1}\sum_{i=1}^{n+1}\left[\frac{F'_{\beta}\left(y_{(i)},\theta,\beta\right)-F'_{\beta}\left(y_{(i-1)},\theta,\beta\right)}{F\left(y_{(i)},\theta,\beta\right)-F\left(y_{(i-1)},\theta,\beta\right)}\right]=0,$$
where
$$F'_{\theta}\left(y_i,\theta,\beta\right)=-\left(y_i^{-\beta}-1\right)^{\theta}\ln\left(y_i^{-\beta}-1\right)\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2} \tag{3.9}$$
and
$$F'_{\beta}\left(y_i,\theta,\beta\right)=\theta\, y_i^{-\beta}\ln\left(y_i\right)\left(y_i^{-\beta}-1\right)^{\theta-1}\left[1+\left(y_i^{-\beta}-1\right)^{\theta}\right]^{-2}. \tag{3.10}$$
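In practice the MPS criterion is usually maximized numerically rather than by solving Eqs (3.9) and (3.10) directly. A sketch (our helper names; Nelder-Mead chosen as the optimizer for simplicity):

```python
import numpy as np
from scipy.optimize import minimize

def ule_cdf(y, theta, beta):
    """ULE cdf, Eq (2.1)."""
    return 1.0 / (1.0 + (y ** (-beta) - 1.0) ** theta)

def neg_mps(params, y):
    """Negative mean log-spacing, with F(y_(0)) = 0 and F(y_(n+1)) = 1."""
    theta, beta = params
    if theta <= 0.0 or beta <= 0.0:
        return np.inf
    grid = np.concatenate(([0.0], ule_cdf(np.sort(y), theta, beta), [1.0]))
    spacings = np.diff(grid)
    if np.any(spacings <= 0.0):
        return np.inf
    return -np.mean(np.log(spacings))

def fit_ule_mps(y, start=(1.0, 1.0)):
    """Maximize the MPS criterion numerically (Nelder-Mead)."""
    res = minimize(neg_mps, x0=np.asarray(start), args=(np.asarray(y),),
                   method="Nelder-Mead")
    return res.x
```

Guarding against non-positive parameters and zero spacings keeps the optimizer inside the feasible region.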
The asymptotic confidence intervals for the parameters β and θ, derived from the MPS estimation, rely on the established asymptotic equivalence between the ML and the MPS method, as noted by [34,38,39]. Thus, the 100(1−ζ)% confidence interval for the parameters using MPS is formulated as follows:
$$\hat\theta_{MPS}\pm z_{1-\frac{\zeta}{2}}\sqrt{\mathrm{var}\left(\hat\theta_{MPS}\right)}\quad\text{and}\quad\hat\beta_{MPS}\pm z_{1-\frac{\zeta}{2}}\sqrt{\mathrm{var}\left(\hat\beta_{MPS}\right)},$$
where $\hat\theta_{MPS}$ and $\hat\beta_{MPS}$ are the MPS estimates of $\theta$ and $\beta$, $z_{1-\frac{\zeta}{2}}$ is the $\left(1-\frac{\zeta}{2}\right)$ quantile of the standard normal distribution, and $\mathrm{var}(\hat\theta_{MPS})$, $\mathrm{var}(\hat\beta_{MPS})$ are the asymptotic variances computed from the inverse of the observed information matrix (3.5).
Least-squares methods produce parameter estimates that coincide with the maximum likelihood estimates when several critical conditions hold. Least-squares parameter estimation is also a standard procedure for evaluating confidence intervals for the unknown model parameters and provides practical ways of fitting models to experimental data.
The least square estimates (LSE) for ˆθLSE and ˆβLSE of θ and β, respectively, are observed by minimizing the function:
$$LSE(\upsilon)=\sum_{i=1}^{n}\left(F\left(y_i,\theta,\beta\right)-E\left[F\left(y_i,\theta,\beta\right)\right]\right)^{2},$$
with respect to $\upsilon$, where $E\left[F\left(y_i,\theta,\beta\right)\right]=\frac{i}{n+1}$ for $i=1,2,...,n$. Then, $\hat\theta_{LSE}$ and $\hat\beta_{LSE}$ are the solutions of the following equations:
$$\frac{\partial LSE(\upsilon)}{\partial\theta}=2\sum_{i=1}^{n}F'_{\theta}\left(y_i,\theta,\beta\right)\left(F\left(y_i,\theta,\beta\right)-\frac{i}{n+1}\right)=0$$
and
$$\frac{\partial LSE(\upsilon)}{\partial\beta}=2\sum_{i=1}^{n}F'_{\beta}\left(y_i,\theta,\beta\right)\left(F\left(y_i,\theta,\beta\right)-\frac{i}{n+1}\right)=0,$$
where $F'_{\theta}\left(y_i,\theta,\beta\right)$ and $F'_{\beta}\left(y_i,\theta,\beta\right)$ are given in Eqs (3.9) and (3.10), respectively.
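Equivalently, $LSE(\upsilon)$ can be minimized directly with a numerical optimizer; an illustrative sketch (the helper names and optimizer choice are ours):

```python
import numpy as np
from scipy.optimize import minimize

def ule_cdf(y, theta, beta):
    """ULE cdf, Eq (2.1)."""
    return 1.0 / (1.0 + (y ** (-beta) - 1.0) ** theta)

def lse_objective(params, y):
    """Sum of squared differences between the fitted cdf at the ordered sample
    and the plotting positions i/(n+1)."""
    theta, beta = params
    if theta <= 0.0 or beta <= 0.0:
        return np.inf
    ys = np.sort(y)
    positions = np.arange(1, len(ys) + 1) / (len(ys) + 1.0)
    return np.sum((ule_cdf(ys, theta, beta) - positions) ** 2)

def fit_ule_lse(y, start=(1.0, 1.0)):
    """Minimize the least-squares criterion numerically (Nelder-Mead)."""
    res = minimize(lse_objective, x0=np.asarray(start), args=(np.asarray(y),),
                   method="Nelder-Mead")
    return res.x
```

Sorting the sample once inside the objective keeps the criterion consistent with the plotting-position definition above.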
The bootstrap percentile method (PCI) is commonly used to construct confidence intervals. The following procedure describes the steps to construct PCI:
ⅰ) Calculate the least squares estimates $\hat\beta_{LSE}$ and $\hat\theta_{LSE}$ for the ULE distribution.
ⅱ) Generate a bootstrap sample by using the estimates (ˆβLSE,ˆθLSE), and then obtain the bootstrap estimate, denoted as (ˆβb,ˆθb), based on the bootstrap sample.
ⅲ) Repeat step (ii) B times to obtain the estimates (ˆβb1,ˆθb1), (ˆβb2,ˆθb2), … (ˆβbB,ˆθbB).
ⅳ) Construct the 100(1−ζ)% bootstrap confidence intervals for the parameters β and θ by using the ζ2 and 1−ζ2 quantiles of the empirical distribution of the bootstrap estimates for β and θ, respectively.
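The four steps above can be sketched as follows, reusing illustrative helpers of the kind shown earlier (a parametric bootstrap with a deliberately small B; all names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def ule_cdf(y, theta, beta):
    return 1.0 / (1.0 + (y ** (-beta) - 1.0) ** theta)

def ule_quantile(tau, theta, beta):
    return (1.0 + (1.0 / tau - 1.0) ** (1.0 / theta)) ** (-1.0 / beta)

def lse_objective(params, y):
    theta, beta = params
    if theta <= 0.0 or beta <= 0.0:
        return np.inf
    ys = np.sort(y)
    positions = np.arange(1, len(ys) + 1) / (len(ys) + 1.0)
    return np.sum((ule_cdf(ys, theta, beta) - positions) ** 2)

def fit_ule_lse(y):
    return minimize(lse_objective, x0=np.array([1.0, 1.0]),
                    args=(np.asarray(y),), method="Nelder-Mead").x

def bootstrap_pci(y, B=50, zeta=0.05, seed=0):
    """Percentile bootstrap intervals for (theta, beta), steps i)-iv)."""
    rng = np.random.default_rng(seed)
    theta0, beta0 = fit_ule_lse(y)                       # step i
    boot = np.empty((B, 2))
    for b in range(B):                                   # steps ii and iii
        yb = ule_quantile(rng.uniform(size=len(y)), theta0, beta0)
        boot[b] = fit_ule_lse(yb)
    lo, hi = np.quantile(boot, [zeta / 2.0, 1.0 - zeta / 2.0], axis=0)  # step iv
    return (lo[0], hi[0]), (lo[1], hi[1])
```

In applications B would typically be in the hundreds or thousands, as in the simulation study below.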
The simulation study provides a comprehensive assessment of the methods for estimating the parameters θ and β. By generating synthetic datasets with known parameter values, one can evaluate how well these estimation methods converge to the true values as the dataset size increases. Measures such as mean squared errors (MSEs) and average biases (ABs) are used to assess the convergence and accuracy of the point estimators. The performance of the 95% confidence intervals is measured based on average widths (AWs) and coverage probabilities (CPs).
To generate observations from the ULE distribution, we applied the inverse transformation method with parameter values (θ,β)=(1,3), (0.45,0.5), (3,0.4), (0.25,1), and (1.9,0.8). These cases represent various shapes of the pdf (2.2) of the ULE distribution. The simulation is repeated 1000 times with sample sizes n = 50, 100, and 150. For each simulation iteration, we generate 1000 bootstrap samples and accumulate 10,000 observations for the Markov chain Monte Carlo (MCMC) algorithm, using the first 1000 observations as burn-in to minimize the effect of the initial values. Additionally, in the thinning process, every fifth observation is selected to reduce dependence and enhance the robustness of the analysis. The simulation results are presented in Tables 1–5. Mathematica scripts were used for all simulation results.
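The inverse-transform generator and the reported metrics can be sketched as follows; the quantile function below is the form implied by Section 5's reparametrization (treat it as an assumption if Eq (2.4) differs), and `simulation_metrics` is an illustrative helper:

```python
import numpy as np

def ule_quantile(u, theta, beta):
    # Inverse cdf implied by Section 5's reparametrization:
    # Q(u) = [1 + (1/u - 1)^(1/theta)]^(-1/beta), 0 < u < 1
    return (1.0 + (1.0 / u - 1.0) ** (1.0 / theta)) ** (-1.0 / beta)

def simulation_metrics(estimates, lowers, uppers, true_value):
    """MSE, average bias, average CI width, and coverage probability
    for one parameter across simulation replications."""
    est = np.asarray(estimates, dtype=float)
    lo = np.asarray(lowers, dtype=float)
    hi = np.asarray(uppers, dtype=float)
    mse = np.mean((est - true_value) ** 2)
    ab = np.mean(est - true_value)
    aw = np.mean(hi - lo)
    cp = np.mean((lo <= true_value) & (true_value <= hi))
    return mse, ab, aw, cp
```

Feeding uniform draws through `ule_quantile` reproduces the data-generation step; the four returned metrics correspond to the MSE, AB, AW, and CP columns of Tables 1–5.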
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
β | θ | β | θ | β | θ | β | θ | |||||
MSEs | 0.0037 | 0.148 | 0.0037 | 0.142 | 0.0035 | 0.076 | 0.0037 | 0.127 | ||||
50 | ABs | 0.0019 | 0.073 | 0.0024 | -0.130 | 0.006 | 0.056 | -0.002 | -0.105 | |||
AWs | 0.228 | 1.441 | 0.239 | 1.352 | 0.229 | 1.283 | 0.248 | 1.517 | ||||
CPs | 0.936 | 0.942 | 0.945 | 0.88 | 0.947 | 0.973 | 0.96 | 0.93 | ||||
MSEs | 0.0017 | 0.067 | 0.0016 | 0.071 | 0.0017 | 0.054 | 0.0019 | 0.063 | ||||
100 | ABs | 0.00027 | -0.0023 | -0.0014 | -0.103 | 0.0019 | 0.0077 | 0.007 | -0.027 | |||
AWs | 0.1629 | 0.995 | 0.166 | 0.964 | 0.163 | 0.943 | 0.170 | 1.115 | ||||
CPs | 0.95 | 0.942 | 0.955 | 0.893 | 0.953 | 0.953 | 0.925 | 0.965 | ||||
MSEs | 0.0011 | 0.043 | 0.001 | 0.049 | 0.0012 | 0.035 | 0.001 | 0.048 | ||||
150 | ABs | 0.0047 | 0.016 | 0.004 | -0.090 | 0.008 | -0.011 | 0.0001 | 0.003 | |||
AWs | 0.133 | 0.817 | 0.136 | 0.790 | 0.133 | 0.775 | 0.135 | 0.925 | ||||
CPs | 0.958 | 0.952 | 0.968 | 0.905 | 0.973 | 0.967 | 0.935 | 0.965 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
β | θ | β | θ | β | θ | β | θ | |||||
MSEs | 0.0108 | 0.005 | 0.011 | 0.005 | 0.027 | 0.0124 | 0.026 | 0.007 | ||||
50 | ABs | 0.0217 | 0.009 | 0.020 | -0.021 | 0.069 | 0.017 | 0.020 | 0.0006 | |||
AWs | 0.425 | 0.306 | 0.404 | 0.268 | 0.654 | 0.419 | 0.589 | 0.330 | ||||
CPs | 0.976 | 0.956 | 0.973 | 0.918 | 0.973 | 0.973 | 0.91 | 0.93 | ||||
MSEs | 0.0055 | 0.003 | 0.005 | 0.003 | 0.0134 | 0.007 | 0.011 | 0.004 | ||||
100 | ABs | 0.009 | -0.0004 | 0.004 | -0.018 | 0.032 | 0.011 | 0.011 | -0.003 | |||
AWs | 0.286 | 0.206 | 0.278 | 0.194 | 0.424 | 0.289 | 0.391 | 0.235 | ||||
CPs | 0.942 | 0.946 | 0.94 | 0.896 | 0.953 | 0.947 | 0.93 | 0.95 | ||||
MSEs | 0.0034 | 0.0017 | 0.003 | 0.002 | 0.005 | 0.0019 | 0.0057 | 0.0022 | ||||
150 | ABs | 0.0121 | -0.0003 | 0.002 | -0.013 | 0.018 | 0.0005 | 0.013 | -0.005 | |||
AWs | 0.231 | 0.166 | 0.226 | 0.160 | 0.236 | 0.168 | 0.317 | 0.191 | ||||
CPs | 0.968 | 0.958 | 0.963 | 0.93 | 0.887 | 0.933 | 0.96 | 0.93 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
β | θ | β | θ | β | θ | β | θ | |||||
MSEs | 0.521 | 0.004 | 0.523 | 0.003 | 0.103 | 0.003 | 2.276 | 0.005 | ||||
50 | ABs | 0.163 | 0.007 | 0.161 | -0.018 | 0.035 | 0.012 | 0.242 | -0.0009 | |||
AWs | 3.009 | 0.254 | 2.831 | 0.220 | 1.960 | 0.218 | 4.488 | 0.286 | ||||
CPs | 0.976 | 0.954 | 0.973 | 0.905 | 1.000 | 0.953 | 0.92 | 0.92 | ||||
MSEs | 0.2443 | 0.002 | 0.264 | 0.002 | 0.138 | 0.0017 | 0.669 | 0.003 | ||||
100 | ABs | 0.067 | -0.0009 | 0.032 | -0.014 | 0.023 | 0.007 | 0.097 | -0.002 | |||
AWs | 2.006 | 0.170 | 1.934 | 0.159 | 1.588 | 0.158 | 2.981 | 0.202 | ||||
CPs | 0.968 | 0.948 | 0.935 | 0.905 | 0.973 | 0.967 | 0.93 | 0.95 | ||||
MSEs | 0.178 | 0.001 | 0.154 | 0.001 | 0.094 | 0.001 | 0.309 | 0.002 | ||||
150 | ABs | 0.082 | 0.00017 | 0.017 | -0.010 | 0.019 | 0.003 | -0.005 | 0.005 | |||
AWs | 1.616 | 0.137 | 1.577 | 0.132 | 1.379 | 0.131 | 2.297 | 0.166 | ||||
CPs | 0.95 | 0.946 | 0.965 | 0.925 | 0.967 | 0.94 | 0.95 | 0.95 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
β | θ | β | θ | β | θ | β | θ | |||||
MSEs | 0.0017 | 0.020 | 0.002 | 0.018 | 0.001 | 0.0133 | 0.002 | 0.021 | ||||
50 | ABs | 0.006 | 0.024 | 0.007 | -0.045 | 0.006 | 0.011 | 0.003 | -0.009 | |||
AWs | 0.158 | 0.527 | 0.161 | 0.484 | 0.162 | 0.518 | 0.178 | 0.551 | ||||
CPs | 0.956 | 0.944 | 0.958 | 0.903 | 0.973 | 0.96 | 0.91 | 0.93 | ||||
MSEs | 0.0008 | 0.009 | 0.0007 | 0.010 | 0.0010 | 0.008 | 0.001 | 0.011 | ||||
100 | ABs | 0.002 | 0.0004 | -0.00004 | -0.036 | 0.008 | 0.0001 | 0.002 | -0.012 | |||
AWs | 0.110 | 0.362 | 0.110 | 0.347 | 0.113 | 0.360 | 0.123 | 0.395 | ||||
CPs | 0.958 | 0.936 | 0.96 | 0.898 | 0.933 | 0.953 | 0.93 | 0.95 | ||||
MSEs | 0.0006 | 0.006 | 0.0005 | 0.006 | 0.0005 | 0.007 | 0.0007 | 0.007 | ||||
150 | ABs | 0.0045 | 0.003 | 0.0002 | -0.026 | 0.0043 | 0.0011 | 0.0004 | -0.013 | |||
AWs | 0.0899 | 0.295 | 0.090 | 0.287 | 0.090 | 0.295 | 0.098 | 0.319 | ||||
CPs | 0.95 | 0.956 | 0.955 | 0.925 | 0.953 | 0.92 | 0.92 | 0.92 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
β | θ | β | θ | β | θ | β | θ | |||||
MSEs | 0.128 | 0.012 | 0.130 | 0.011 | 0.063 | 0.011 | 0.210 | 0.015 | ||||
50 | ABs | 0.054 | 0.018 | 0.061 | -0.036 | 0.020 | 0.021 | 0.037 | -0.005 | |||
AWs | 1.397 | 0.438 | 1.411 | 0.400 | 1.226 | 0.420 | 1.672 | 0.461 | ||||
CPs | 0.948 | 0.952 | 0.973 | 0.898 | 0.973 | 0.973 | 0.91 | 0.93 | ||||
MSEs | 0.0609 | 0.006 | 0.059 | 0.007 | 0.051 | 0.006 | 0.098 | 0.008 | ||||
100 | ABs | 0.022 | -0.0007 | 0.005 | -0.028 | 0.036 | 0.004 | 0.022 | -0.008 | |||
AWs | 0.972 | 0.301 | 0.968 | 0.288 | 0.924 | 0.297 | 1.144 | 0.329 | ||||
CPs | 0.952 | 0.938 | 0.948 | 0.91 | 0.947 | 0.953 | 0.93 | 0.96 | ||||
MSEs | 0.040 | 0.004 | 0.039 | 0.004 | 0.036 | 0.004 | 0.049 | 0.005 | ||||
150 | ABs | 0.0408 | 0.0007 | 0.003 | -0.0201 | 0.050 | 0.004 | 0.033 | -0.007 | |||
AWs | 0.795 | 0.245 | 0.785 | 0.238 | 0.771 | 0.242 | 0.929 | 0.271 | ||||
CPs | 0.964 | 0.958 | 0.958 | 0.928 | 0.96 | 0.953 | 0.96 | 0.93 |
The findings from these tables regarding accuracy and bias are summarized as follows:
● The MSE for all estimation methods decreases as the sample size increases from 50 to 150, indicating improved accuracy with larger sample sizes.
● The ABs generally reduce with larger sample sizes, suggesting that estimators become less biased.
For comparison of estimators from Tables 1–5, the following points are indicated:
● ML estimation shows relatively low MSEs and consistent CPs, making it a reliable choice across sample sizes.
● MPSE has competitive MSEs and ABs, but its CPs for θ can be slightly lower, especially at smaller n.
● Bayesian estimation provides good MSEs and high CPs, often outperforming other methods in maintaining high coverage.
● The average width of the confidence intervals decreases as the sample size increases for all estimation techniques, indicating more precise estimates with larger datasets. It is also observed that the approximate confidence interval (ACI) for MPS has smaller interval widths than the other estimation approaches.
● The coverage probabilities are close to the nominal level, showing that the confidence intervals produced by these methods effectively contain the true parameter values. It is observed that LSE yields coverage probabilities higher than the nominal level.
Overall, Bayesian estimation offers strong performance with high CPs and reasonable AWs. ML estimation consistently performs well, with solid MSEs, ABs, and CPs, making it a dependable choice. While small-sample scenarios are not the primary focus of this study, we acknowledge their practical importance; bootstrap or Bayesian approaches would be more appropriate for small-sample analyses than the asymptotic confidence intervals.
This section proposes a new quantile regression model based on the ULE distribution for unit response variables, offering an alternative to the commonly used beta, Kumaraswamy, and unit-Weibull regression models. When incorporating covariate information into regression analysis of a probability distribution, it is common to connect the mean response to the covariates. In contrast to modeling the mean of the response, quantile regression models were pioneered by [40] to model the conditional quantiles of the response variable as a function of the covariates without any distributional assumptions on the error term, and these models are also considered robust regressions.
Since the mean of the ULE distribution does not have a tractable closed-form expression, quantile regression models are a solid substitute. The quantile function of the ULE distribution has a simple tractable form, allowing its cdf and pdf to be expressed through a re-parametrization based on the quantile function in Eq (2.4). Let \mu = Q(\tau, \theta, \beta) and re-express the parameter \beta = -\frac{\log\left[(1/\tau-1)^{1/\theta}+1\right]}{\log(\mu)} . Then, the cdf and pdf of the re-parametrized ULE distribution are given by
G(y, \mu, \theta) = \left[1+\left(y^{\frac{\log\left((1/\tau-1)^{1/\theta}+1\right)}{\log(\mu)}}-1\right)^{\theta}\right]^{-1} | (5.1) |
and
g(y, \mu, \theta) = \frac{\theta\, \log\left((1/\tau-1)^{1/\theta}+1\right)}{\log(1/\mu)}\, y^{\frac{\log\left((1/\tau-1)^{1/\theta}+1\right)}{\log(\mu)}-1}\left[y^{\frac{\log\left((1/\tau-1)^{1/\theta}+1\right)}{\log(\mu)}}-1\right]^{\theta-1}\left[\left(y^{\frac{\log\left((1/\tau-1)^{1/\theta}+1\right)}{\log(\mu)}}-1\right)^{\theta}+1\right]^{-2}, | (5.2) |
where 0<y<1,0<μ<1 and 0<τ<1.
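Eqs (5.1) and (5.2) admit a direct numerical sanity check: since \mu is the \tau -th quantile, the cdf evaluated at y = \mu must return \tau , and the pdf must integrate to one over (0, 1) . A sketch:

```python
import numpy as np
from scipy.integrate import quad

def ule_q_cdf(y, mu, theta, tau):
    # Eq (5.1): cdf of the re-parametrized ULE; mu is the tau-th quantile
    c = np.log((1.0 / tau - 1.0) ** (1.0 / theta) + 1.0)
    return 1.0 / (1.0 + (y ** (c / np.log(mu)) - 1.0) ** theta)

def ule_q_pdf(y, mu, theta, tau):
    # Eq (5.2), the derivative of Eq (5.1) with respect to y
    c = np.log((1.0 / tau - 1.0) ** (1.0 / theta) + 1.0)
    t = y ** (c / np.log(mu))
    return (theta * c / np.log(1.0 / mu) * t / y
            * (t - 1.0) ** (theta - 1.0)
            / ((t - 1.0) ** theta + 1.0) ** 2)
```

At y = \mu the inner power reduces to (1/\tau-1)^{1/\theta}+1 , so the bracket collapses to [1 + (1/\tau - 1)]^{-1} = \tau exactly, which the check below confirms in floating point.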
Given independent random variables Y_i, \; i = 1, \dots, n , from the re-parameterized ULE distribution with pdf in Eq (5.2), it is feasible to establish the ULE quantile regression. This regression assumes the functional relation for the conditional \tau -th quantile \mu_i of Y_i
\begin{equation*} g(\mu_i) = \boldsymbol{\delta}^{T}\boldsymbol{x_i}, \end{equation*} |
where \boldsymbol{x_i} = (1, x_{1i}, x_{2i}, \dots, x_{ki}) represents the covariates vector; also, \boldsymbol{\delta} = (\delta_{0}, \delta_{1}, \dots, \delta_{k})^{T} is the coefficients vector in the regression model. Various selections for the link function g(.) can be treated; however, in this context, we only consider the logit link function:
\begin{equation*} g(\mu_i) = \log\left(\frac{\mu_i}{1-\mu_i}\right). \end{equation*} |
Thus we have
\begin{equation} \mu_i = \frac{e^{\boldsymbol{\delta}^{T}\boldsymbol{x_i}}}{1+e^{\boldsymbol{\delta}^{T}\boldsymbol{x_i}}}, \: i = 1, \dots, n. \end{equation} | (5.3) |
When \tau = 0.5 , the conditional median of the response is obtained.
Referring to Eqs (5.2) and (5.3), the log-likelihood function of the ULE quantile regression model is given by
\begin{equation*} \begin{split} \ell({\bf{\Omega}}) = &n\log(\theta)- \sum\limits_{i = 1}^{n} \log (\log (1/\mu_i ))\\&+ n \log \left(\log \left((1/\tau -1)^{1/\theta }+1\right)\right)+\sum\limits_{i = 1}^{n}\log(y_i) \left[\frac{\log \left((\frac{1}{\tau }-1)^{1/\theta }+1\right)}{\log (\mu_i )}-1\right] \\ &+(\theta -1)\sum\limits_{i = 1}^{n} \log\left[y_i^{\frac{\log \left((\frac{1}{\tau }-1)^{1/\theta }+1\right)}{\log (\mu_i )}}-1\right]-2 \sum\limits_{i = 1}^{n} \log\left[\left(y_i^{\frac{\log \left(\left(\frac{1}{\tau }-1\right)^{1/\theta }+1\right)}{\log (\mu_i )}}-1\right)^{\theta }+1\right], \end{split} \end{equation*} |
where {\bf{\Omega}} = (\theta, \boldsymbol{\delta}^T)^T is the vector of the unknown parameters. The ML estimate \hat{{\bf{\Omega}}} = (\hat{\theta}, \hat{\boldsymbol{\delta}}^T)^T can be obtained by directly maximizing the log-likelihood function \ell({\bf{\Omega}}) using mathematical packages such as Mathematica or R.
In the classical approach, the confidence intervals for the parameters are constructed based on asymptotic normality, i.e., \hat{{\bf{\Omega}}} can be approximated by a (k+1) -variate normal distribution with mean {\bf{\Omega}} and covariance matrix I^{-1}(\hat{{\bf{\Omega}}}) , where I({\bf{\Omega}}) is the observed information matrix defined by
\begin{equation*} I({\bf{\Omega}}) = -\left[\frac{\partial^2 \ell({\bf{\Omega}})}{\partial {\bf{\Omega}} \partial {\bf{\Omega}}^T} \right]. \end{equation*} |
The asymptotic (1-\alpha)100\% confidence interval for the parameter \Omega_{j}; j = 1, \dots, k+1 is \hat{\Omega}_{j}\pm{{z}_{\frac{\alpha }{2}}}se(\hat{\Omega}_{j}) , where {{z}_{\frac{\alpha }{2}}} represents the upper {\frac{\alpha }{2}} percentile of the standard normal distribution and se(\hat{\Omega}_{j}) is the square root of the j^{th} diagonal entry of the inverse observed information matrix I^{-1}({\hat{\bf{\Omega}}}) .
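A minimal sketch of the ML fit of the ULE quantile regression with the logit link of Eq (5.3); `fit_ule_qreg` and its Nelder-Mead settings are illustrative choices, not the authors' Mathematica/R implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def ule_q_logpdf(y, mu, theta, tau):
    # log of Eq (5.2); mu is the tau-th conditional quantile
    c = np.log((1.0 / tau - 1.0) ** (1.0 / theta) + 1.0)
    t = y ** (c / np.log(mu))
    return (np.log(theta) + np.log(c) - np.log(np.log(1.0 / mu))
            + np.log(t) - np.log(y)
            + (theta - 1.0) * np.log(t - 1.0)
            - 2.0 * np.log((t - 1.0) ** theta + 1.0))

def fit_ule_qreg(y, X, tau=0.5, start=None):
    """ML fit of the ULE quantile regression with a logit link.
    X: (n, k+1) design matrix including the intercept column."""
    n, p = X.shape
    if start is None:
        start = np.r_[1.0, np.zeros(p)]           # (theta, delta)
    def negloglik(params):
        theta, delta = params[0], params[1:]
        if theta <= 0:
            return 1e10
        mu = expit(X @ delta)                     # Eq (5.3)
        ll = ule_q_logpdf(y, mu, theta, tau)
        return 1e10 if not np.all(np.isfinite(ll)) else -np.sum(ll)
    res = minimize(negloglik, start, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x
```

For the observed-information-based intervals above, the Hessian of `negloglik` at the optimum can be approximated numerically and inverted.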
Residual analysis plays a vital role in testing the goodness of fit for a regression model. For this purpose, two types of residuals are employed: randomized quantile residual (RQR) [41] and Cox-Snell residual (CSR) [42]. The RQRs are computed as follows:
\begin{equation*} \hat{r}_i = \Phi^{-1}[G(y_i, \mu_i, \theta)], \end{equation*} |
where G(y_i, \mu_i, \theta) denotes the cdf of the ULE quantile regression model in Eq (5.1), and \Phi^{-1}[.] represents the inverse cdf of the standard normal distribution. The adequacy of the fitted model is indicated by the RQRs ( \hat{r}_i ) following a standard normal distribution.
On the other hand, the Cox-Snell residuals are written as:
\begin{equation*} \hat{e}_i = - \log[\bar{G}(y_i, \mu_i, \theta)], \end{equation*} |
where \bar{G}(y_i, \mu_i, \theta) = 1 - G(y_i, \mu_i, \theta) represents the survival function of the ULE quantile regression model. The Cox-Snell residuals ( \hat{e}_i ) are expected to follow an exponential distribution with a scale parameter of 1 .
In summary, both the randomized quantile residual and Cox-Snell residual are valuable tools to evaluate the goodness of fit for the ULE quantile regression model and can provide insights into the appropriateness of the model for a given dataset.
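Both residual types follow directly from Eq (5.1); under a correctly specified model the RQRs should behave as standard normal draws and the CSRs as standard exponential draws. A sketch:

```python
import numpy as np
from scipy.stats import norm, kstest

def ule_q_cdf(y, mu, theta, tau):
    # Eq (5.1): cdf of the re-parametrized ULE
    c = np.log((1.0 / tau - 1.0) ** (1.0 / theta) + 1.0)
    return 1.0 / (1.0 + (y ** (c / np.log(mu)) - 1.0) ** theta)

def randomized_quantile_residuals(y, mu, theta, tau):
    # r_i = Phi^{-1}[G(y_i, mu_i, theta)]; ~ N(0,1) under a correct model
    return norm.ppf(ule_q_cdf(y, mu, theta, tau))

def cox_snell_residuals(y, mu, theta, tau):
    # e_i = -log[1 - G(y_i, mu_i, theta)]; ~ Exp(1) under a correct model
    return -np.log1p(-ule_q_cdf(y, mu, theta, tau))
```

In practice `mu`, `theta` are replaced by their fitted values, and the residuals are inspected via Q-Q (normal) and P-P (exponential) plots as described above.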
This subsection outlines a simulation study to assess the effectiveness of maximum likelihood estimators (MLEs) and their corresponding asymptotic confidence intervals (CIs) for the parameters in the ULE quantile regression model. The evaluation metrics include the mean squared error (MSE) and average bias for the point estimates of the parameters, while coverage probabilities (CPs) and average widths (AWs) are used to assess the efficiency of the constructed 95\% CIs. Two distinct scenarios are considered within the simulation design:
(a) The case of one covariate: the simulation study is performed as
\begin{equation*} logit(\mu_i) = \delta_0 +\delta_1 y_{i1}, \; i = 1, \dots, n. \end{equation*} |
In this case, we consider (\delta_0, \delta_1) = (-2, 2) .
(b) The case of two covariates: the simulation study is performed as
\begin{equation*} logit(\mu_i) = \delta_0 +\delta_1 y_{i1}+\delta_2 y_{i2}, \; i = 1, \dots, n. \end{equation*} |
In this case, we consider (\delta_0, \delta_1, \delta_2) = (2, -2, 2) .
In both cases, the true value of the parameter \theta is 0.5 , and the quantile levels are \tau = 0.1, 0.25, 0.5, 0.75, and 0.9 . The covariate y_{i1} is generated from a Bernoulli distribution with parameter 0.5 , while the covariate y_{i2} is generated from the standard normal distribution, for n = 50, 100 and 150 . For each scenario, the simulation is repeated 1000 times.
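The two simulation designs can be sketched as below; `make_design` and `simulate_response` are illustrative helpers, with the response generated by inverse transform from the re-parametrized cdf of Eq (5.1):

```python
import numpy as np
from scipy.special import expit

def make_design(n, scenario, rng):
    """Covariates and conditional quantiles mu_i for the two scenarios:
    (a) logit(mu_i) = -2 + 2*y_i1,  (b) logit(mu_i) = 2 - 2*y_i1 + 2*y_i2."""
    y1 = rng.binomial(1, 0.5, size=n).astype(float)   # Bernoulli(0.5)
    if scenario == "a":
        return (y1,), expit(-2.0 + 2.0 * y1)
    y2 = rng.standard_normal(n)                       # standard normal
    return (y1, y2), expit(2.0 - 2.0 * y1 + 2.0 * y2)

def simulate_response(mu, theta, tau, rng):
    # inverse transform for the re-parametrized ULE of Eq (5.1)
    c = np.log((1.0 / tau - 1.0) ** (1.0 / theta) + 1.0)
    u = rng.uniform(size=len(mu))
    return (1.0 + (1.0 / u - 1.0) ** (1.0 / theta)) ** (np.log(mu) / c)
```

By construction, a proportion \tau of the responses falls below the conditional quantile \mu_i , which gives a quick correctness check for the generator.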
The simulation results are given in Tables 6 and 7. The results in these tables confirm desirable properties of the maximum likelihood estimates for the parameters of the ULE quantile regression. From Tables 6 and 7, we observe that the MSEs, ABs, and AWs decrease with increasing sample size. These metrics consistently attain their lowest values for the parameter \theta . The MSEs and ABs attain their highest values when \tau = 0.9 , particularly for small sample sizes. The coverage probabilities remain close to the nominal 95\% confidence level.
\tau | n | \theta | \delta_0 | \delta_1 | \delta_2 | |||||||||||||||
MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | |||||
0.1 | 50 | 0.007 | 0.016 | 0.303 | 0.943 | 0.097 | 0.033 | 1.202 | 0.943 | 0.251 | 0.080 | 1.960 | 0.951 | 0.051 | 0.023 | 0.871 | 0.96 | |||
100 | 0.003 | 0.006 | 0.207 | 0.966 | 0.053 | 0.012 | 0.835 | 0.946 | 0.102 | 0.012 | 1.252 | 0.937 | 0.038 | 0.010 | 0.725 | 0.946 | ||||
150 | 0.002 | -0.002 | 0.167 | 0.946 | 0.035 | 0.024 | 0.705 | 0.931 | 0.072 | 0.004 | 1.009 | 0.926 | 0.024 | 0.011 | 0.593 | 0.963 | ||||
0.25 | 50 | 0.008 | 0.018 | 0.304 | 0.908 | 0.092 | 0.063 | 1.202 | 0.944 | 0.281 | -0.033 | 1.986 | 0.952 | 0.0553 | 0.016 | 0.882 | 0.944 | |||
100 | 0.003 | 0.004 | 0.206 | 0.944 | 0.053 | 0.023 | 0.833 | 0.924 | 0.0998 | 0.016 | 1.243 | 0.964 | 0.038 | 0.010 | 0.710 | 0.944 | ||||
150 | 0.002 | -0.002 | 0.167 | 0.932 | 0.033 | 0.006 | 0.703 | 0.948 | 0.061 | 0.003 | 1.013 | 0.952 | 0.024 | 0.008 | 0.604 | 0.944 | ||||
0.5 | 50 | 0.008 | 0.015 | 0.300 | 0.912 | 0.121 | 0.047 | 1.381 | 0.948 | 0.293 | -0.031 | 2.006 | 0.956 | 0.058 | 0.023 | 0.897 | 0.936 | |||
100 | 0.003 | 0.004 | 0.205 | 0.94 | 0.065 | 0.023 | 0.942 | 0.936 | 0.099 | 0.016 | 1.246 | 0.968 | 0.039 | 0.011 | 0.716 | 0.936 | ||||
150 | 0.002 | -0.002 | 0.166 | 0.928 | 0.042 | 0.016 | 0.789 | 0.944 | 0.062 | 0.005 | 1.015 | 0.952 | 0.024 | 0.007 | 0.609 | 0.948 | ||||
0.75 | 50 | 0.006 | 0.016 | 0.273 | 0.928 | 0.437 | -0.026 | 2.367 | 0.932 | 0.269 | -0.017 | 2.0686 | 0.968 | 0.081 | 0.077 | 0.991 | 0.932 | |||
100 | 0.002 | 0.007 | 0.192 | 0.96 | 0.163 | 0.005 | 1.614 | 0.976 | 0.120 | -0.037 | 1.285 | 0.952 | 0.047 | 0.050 | 0.794 | 0.952 | ||||
150 | 0.002 | -0.001 | 0.153 | 0.932 | 0.126 | 0.041 | 1.310 | 0.924 | 0.060 | -0.018 | 1.033 | 0.984 | 0.032 | 0.026 | 0.648 | 0.94 | ||||
0.9 | 50 | 0.006 | 0.048 | 0.256 | 0.952 | 0.810 | -0.420 | 3.339 | 0.968 | 0.260 | 0.014 | 2.213 | 0.964 | 0.097 | 0.060 | 1.181 | 0.952 | |||
100 | 0.004 | 0.041 | 0.187 | 0.852 | 0.586 | -0.403 | 2.327 | 0.876 | 0.156 | 0.002 | 1.396 | 0.94 | 0.082 | 0.040 | 0.958 | 0.94 | ||||
150 | 0.003 | 0.036 | 0.148 | 0.876 | 0.329 | -0.313 | 1.836 | 0.892 | 0.079 | -0.017 | 1.104 | 0.96 | 0.047 | -0.024 | 0.759 | 0.932 |
\tau | n | \theta | \delta_0 | \delta_1 | |||||||||||
MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | ||||
0.1 | 50 | 0.007 | 0.011 | 0.296 | 0.916 | 0.338 | -0.025 | 2.504 | 0.968 | 0.480 | 0.078 | 2.831 | 0.968 | ||
100 | 0.003 | 0.005 | 0.206 | 0.956 | 0.158 | 0.013 | 1.592 | 0.948 | 0.199 | 0.020 | 1.895 | 0.972 | |||
150 | 0.002 | -0.002 | 0.167 | 0.948 | 0.103 | 0.0003 | 1.336 | 0.956 | 0.150 | -0.002 | 1.569 | 0.948 | |||
0.25 | 50 | 0.007 | 0.012 | 0.297 | 0.908 | 0.349 | -0.046 | 2.520 | 0.976 | 0.518 | 0.099 | 2.846 | 0.968 | ||
100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.188 | -0.007 | 1.602 | 0.912 | 0.230 | 0.045 | 1.905 | 0.956 | |||
150 | 0.002 | -0.001 | 0.167 | 0.920 | 0.118 | -0.020 | 1.344 | 0.972 | 0.163 | 0.030 | 1.574 | 0.948 | |||
0.5 | 50 | 0.007 | 0.016 | 0.299 | 0.929 | 0.425 | -0.030 | 2.806 | 0.954 | 0.440 | 0.039 | 2.889 | 0.974 | ||
100 | 0.003 | 0.004 | 0.206 | 0.954 | 0.217 | -0.016 | 1.837 | 0.940 | 0.240 | 0.017 | 1.954 | 0.960 | |||
150 | 0.002 | -0.002 | 0.167 | 0.951 | 0.153 | 0.010 | 1.519 | 0.943 | 0.170 | 0.027 | 1.596 | 0.949 | |||
0.75 | 50 | 0.007 | 0.012 | 0.298 | 0.936 | 2.089 | -0.254 | 5.308 | 0.944 | 0.974 | 0.242 | 3.753 | 0.940 | ||
100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.650 | -0.064 | 3.424 | 0.956 | 0.346 | 0.061 | 2.360 | 0.936 | |||
150 | 0.002 | -0.003 | 0.166 | 0.936 | 0.561 | 0.012 | 2.785 | 0.928 | 0.237 | 0.031 | 1.899 | 0.936 | |||
0.9 | 50 | 0.007 | 0.021 | 0.309 | 0.928 | 4.413 | 0.488 | 3.186 | 0.898 | 1.563 | -0.764 | 1.728 | 0.902 | ||
100 | 0.003 | 0.008 | 0.214 | 0.980 | 1.707 | 0.817 | 2.539 | 0.902 | 0.986 | -0.893 | 1.068 | 0.880 | |||
150 | 0.002 | 0.007 | 0.173 | 0.960 | 1.518 | 0.873 | 1.606 | 0.926 | 0.989 | -0.925 | 0.630 | 0.928 |
This section explores the advantages of the practical use of the ULE distribution and its regression model compared to other models commonly used for modeling data within the unit interval. The unit models used in this comparison are the Beta, Kumaraswamy, Topp-Leone, unit-Burr XII, and unit half-logistic geometric (UHLG) distributions.
To assess and compare the fitted distributions, we employ various goodness-of-fit measures. These include the Kolmogorov-Smirnov distance (KS) with the corresponding p-value, Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Anderson-Darling statistic ( A^\star ), and Cramér-von Mises statistic ( W^\star ).
Additionally, two graphical methods are used to assess the goodness of fit for the regression models:
i) The quantile-quantile (QQ) plot, which compares randomized quantile residuals against normal quantiles with a simulated enclosure, as proposed by Atkinson [43].
ii) The probability-probability (PP) plot compares the empirical probabilities of Cox-Snell residuals with those from the standard exponential distribution.
The model whose values lie closest to the diagonal line in both plots is considered the best fit for the data. Mathematica scripts were used for all numerical calculations and to generate the figures.
The recovery rate of CD34+ cells is a crucial factor in assessing the sufficiency of peripheral blood stem cell (PBSC) collection for bone marrow transplantation. This procedure facilitates rapid hematologic recovery following myeloablative therapy for various malignant hematological diseases.
This study analyzed data from 239 patients who underwent autologous PBSC transplantation, focusing on CD34+ cell recovery rates as documented in [44]. Additionally, we incorporated relevant covariates that could potentially influence CD34+ cell viability recovery:
● Gender ( y_1 ): 0 denotes female and 1 denotes male;
● Chemotherapy ( y_2 ): 0 for a single-day chemotherapy regimen and 1 for a three-day regimen;
● Age ( y_3 ): Adjusted patient's age, calculated as the current age minus 40.
A univariate modeling approach was employed to assess the suitability of the ULE distribution for modeling CD34+ cell recovery rates. The ML estimates of the ULE distribution's parameters and various goodness-of-fit statistics were calculated and are presented in Table 8. The results in this table strongly suggest that the ULE distribution provides the best fit for the given dataset: it generally outperforms the other models across multiple criteria (AIC, BIC, -Loglik, W^\star , A^\star ), indicating that it best fits the CD34+ cell recovery data. The unit-Omega and unit-BurrXII models perform well but do not surpass the ULE model in AIC, BIC, or -Loglik. To visualize this further, Figure 4 includes histogram plots of the data, overlaid with the estimated densities of the ULE distribution, as well as the Kaplan-Meier curve and the estimated survival curves.
Model | Estimates (Standard Errors) | K-S (p-value) | AIC | BIC | -Loglik | W^\star | A^\star | |
\beta | \theta | |||||||
ULE | 3.2454 | 1.77959 | 0.0500323 | -383.997 | -377.044 | -193.998 | 0.0675539 | 0.48204 |
(0.142222) | (0.100898) | (0.587827) | ||||||
Beta | 2.28593 | 8.66714 | 0.0650045 | -379.734 | -372.782 | -191.867 | 0.14045 | 0.87598 |
(0.458761) | (0.332228) | (0.264737) | ||||||
Kumaraswamy | 2.43553 | 6.69423 | 0.0722763 | -377.528 | -370.575 | -190.764 | 0.191987 | 1.14739 |
(0.458761) | (0.332228) | (0.16457) | ||||||
Unit-Omega | 3.6080 | 7.7337 | 0.05331 | -383.955 | -377.002 | -193.977 | 0.0820 | 0.5276 |
(0.3322) | (0.4588) | (0.5055) | ||||||
Unit- BurrXII | 1.73211 | 10.076 | 0.0522282 | -383.005 | -376.052 | -193.503 | 0.0888576 | 0.582411 |
(0.458761) | (0.332228) | (0.532105) |
To derive Bayesian estimates and credible intervals, we ran the MCMC algorithm 55,000 times, discarding the first 5,000 iterations as burn-in and selecting every 5th value for thinning. The burn-in period was determined from pilot runs of the Metropolis-Hastings algorithm; specifically, we monitored the convergence behavior of the chains for the parameters \theta and \beta , examining trace plots to identify when the Markov chain stabilizes around the posterior distribution. Figure 5 shows a smooth histogram of the marginal posterior density, a trace plot where each chain remains within a similar region without distinct trends, and results from the first 100 lags of the autocorrelation function (ACF). These observations reflect the efficient convergence of the MCMC chain.
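The MCMC scheme (Metropolis-Hastings with burn-in followed by thinning) can be sketched as below. The flat priors, random-walk proposal, and step size are illustrative assumptions, and the likelihood uses the ULE cdf form implied by Section 5's reparametrization:

```python
import numpy as np

def ule_loglik(y, theta, beta):
    # log-likelihood under the cdf F(y) = [1 + (y^(-beta) - 1)^theta]^(-1)
    u = y ** (-beta) - 1.0
    return np.sum(np.log(theta) + np.log(beta) - (beta + 1.0) * np.log(y)
                  + (theta - 1.0) * np.log(u) - 2.0 * np.log1p(u ** theta))

def mh_sample(y, n_iter=55_000, burn=5_000, thin=5, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings for (theta, beta), with flat priors
    on (0, inf) as an illustrative stand-in for the paper's prior choice."""
    rng = np.random.default_rng(seed)
    cur = np.array([1.0, 1.0])
    cur_ll = ule_loglik(y, *cur)
    draws = []
    for i in range(n_iter):
        prop = cur + step * rng.standard_normal(2)
        if np.all(prop > 0):                       # reject invalid proposals
            prop_ll = ule_loglik(y, *prop)
            if np.log(rng.uniform()) < prop_ll - cur_ll:
                cur, cur_ll = prop, prop_ll
        if i >= burn and (i - burn) % thin == 0:   # burn-in, then thinning
            draws.append(cur.copy())
    return np.array(draws)
```

The defaults mirror the paper's settings (55,000 iterations, 5,000 burn-in, thinning by 5); posterior means and percentile-based credible intervals are then computed from the retained draws.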
Percentile bootstrap confidence intervals were computed from 1,000 bootstrap samples of the parameters. Table 9 displays the maximum likelihood estimates (MLE), maximum product spacing estimates (MPSE), Bayesian estimates (BSE), and least squares estimates (LSE), together with the associated 95\% confidence and credible intervals. It is evident that the MLE, MPSE, BSE, and LSE for the parameter \theta are closely aligned, with intervals that are nearly of the same length.
Parameter | MLE | MPSE | BSE | LSE | |||||||
MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI | ||||
\theta | 1.77959 | (1.5818, 1.9774) | 1.8044 | (1.6114, 2.0062) | 1.7469 | (1.5527, 1.9412) | 1.8551 | (1.6090, 2.0701) | |||
\beta | 3.2454 | (2.9667, 3.5241) | 3.2254 | (2.9628, 3.5129) | 3.2453 | (2.9631, 3.5275) | 3.1989 | (2.9563, 3.4995) |
A multivariate regression model is now employed to examine the effects of gender, chemotherapy, and age on the recovery rate. The proposed regression model is:
\begin{equation*} logit(\mu_i) = \delta_0+\delta_1 y_{i1}+\delta_2 y_{i2}+\delta_3 y_{i3}; i = 1, \dots, 239. \end{equation*} |
Table 10 presents the fitting results for the beta, Kumaraswamy, and ULE regression models: the ML estimates of \alpha and \delta_i, i = 0, 1, 2, 3 , along with their respective standard errors (SE) and p-values. Additionally, each regression model is accompanied by its AIC and BIC statistics.
Parameters | Beta | Kumaraswamy | ULE | ||||||||
Estimate | SE | p-value | Estimate | SE | p-value | Estimate | SE | p-value | |||
\delta_0 | 0.9990 | 0.1291 | < 0.0000 | 1.1997 | 0.1397 | < 0.0000 | 1.04811 | 0.135555 | < 0.0000 | ||
\delta_1 | 0.0659 | 0.0939 | 0.483 | 0.0418 | 0.0955 | 0.6619 | 0.100013 | 0.07255 | 0.169365 | ||
\delta_2 | 0.2116 | 0.1038 | 0.0425 | 0.1833 | 0.1150 | 0.1123 | 0.21845 | 0.104438 | 0.037553 | ||
\delta_3 | 0.0142 | 0.0054 | 0.0088 | 0.0107 | 0.0059 | 0.0692 | 0.01709 | 0.00547946 | 0.002043 | ||
\alpha | 11.3447 | 1.0181 | < 0.0000 | 6.7274 | 0.4543 | < 0.0000 | 1.82216 | 0.10317 | < 0.0000 | ||
AIC | -381.79 | -375.66 | -388.659 | ||||||||
BIC | -364.41 | -358.28 | -371.276 |
According to Table 10, there is a statistically significant positive effect of the type of chemotherapy and age on the recovery rate of CD34+ cells, whereas the gender parameter does not significantly affect the recovery rate in any of the regression models.
Additionally, compared with the beta and Kumaraswamy regression models, the ULE regression model offers the best fit to the data, as indicated by its lower AIC and BIC values. This finding is further illustrated in Figure 6, which displays the P-P plot of the Cox-Snell residuals and Q-Q plots of the randomized quantile residuals with a simulated envelope for all the fitted regression models.
The second dataset, presented by Schmit and Roth [45], includes 73 responses from a survey distributed to 374 risk managers of major North American corporations. Schmit and Roth [45] sought to assess the cost-effectiveness of different management strategies in reducing a company's risk of property losses and accidents, taking into account specific company characteristics such as size and industry. The response variable, y (Firmcost), represents the firm-specific ratio of premiums plus uninsured losses relative to total assets. The covariates associated with this response variable are as follows:
● Assume ( y_1 ): the firm-specific ratio of the total per-occurrence retention levels, as assessed by the corporate risk manager.
● Cap ( y_2 ): 1 if the firm utilizes a captive insurance, and zero if it does not.
● Sizelog ( y_3 ): the logarithm of the firm's total asset value.
● Indcost ( y_4 ): the industry average of premiums and uninsured losses relative to total assets, as reported in the 1985 cost of risk survey (a risk measurement).
● Central ( y_5 ): the role of the local manager in determining local retention levels, as evaluated by the corporate risk manager.
● Soph ( y_6 ): the significance of analytical tools in risk management decision-making, as assessed by the corporate risk manager.
Initially, univariate modeling is applied to the response variable, risk management cost-effectiveness data, to evaluate the performance of the ULE distribution. Table 11 presents the ML estimates for the parameters along with the goodness-of-fit statistics. The findings in Table 11 indicate that the ULE model provides the best overall fit to the risk management cost-effectiveness data, as evidenced by its favorable AIC, BIC, -Loglik, and goodness-of-fit test results (K-S, Cramér-von Mises, and Anderson-Darling). The unit-Omega model performs reasonably well but does not surpass the ULE model across the proposed criteria. This conclusion is further illustrated in Figure 7, which displays histogram plots of the data with the estimated densities, as well as the Kaplan-Meier curve with the estimated survival curves.
Model | Estimates (Standard Errors) | K-S (p-value) | AIC | BIC | -Loglik | Cramér-von Mises | Anderson-Darling | |
\beta | \theta | |||||||
ULE | 0.257 | 2.582 | 0.068 | -170.611 | -166.03 | -87.306 | 0.096 | 0.874 |
(0.142) | (0.101) | (0.893) | ||||||
Beta | 3.798 | 0.613 | 0.181 | -148.235 | -143.654 | -76.118 | 0.697 | 3.961 |
(0.459) | (0.332) | (0.017) | ||||||
Kumaraswamy | 3.441 | 0.665 | 0.154 | -153.308 | -148.727 | -78.654 | 0.502 | 3.097 |
(0.459) | (0.332) | (0.064) | ||||||
Unit Omega | 5.377 | 0.787 | 0.128 | -163.383 | -158.802 | -83.691 | 0.336 | 2.153 |
(0.928) | (0.073) | (0.180) | ||||||
Unit-BurrXII | 0.348 | 2.841 | 0.338 | -89.013 | -84.432 | -46.507 | 2.662 | 12.874 |
(0.0625) | (0.421) | (0.0) |
To compute the Bayesian estimates and confidence intervals, we ran the MCMC algorithm 55,000 times, discarding the initial 5,000 iterations as burn-in and selecting every 5th value for thinning. As in the previous section, the burn-in period was determined from pilot runs of the Metropolis-Hastings algorithm by monitoring the convergence of the trace plots, as shown in Figure 8. This figure illustrates a smooth and well-behaved histogram of the marginal posterior density, a trace plot showing that each chain consistently explores the same region without evident trends, and the results from the first 100 lags of the autocorrelation function (ACF). These findings indicate fast convergence of the MCMC chain. Percentile bootstrap confidence intervals were obtained from 1,000 bootstrap samples of the parameters. Table 12 presents the ML estimates (MLE), maximum product spacing estimates (MPSE), Bayesian estimates (BSE), and least squares estimates (LSE), with the corresponding 95\% intervals. It is noted that the MLE, MPSE, BSE, and LSE of the parameter \theta are similar, with intervals of nearly the same length. The regression model for \mu_i is presented as
\begin{equation*} logit(\mu_i) = \delta_0+\delta_1 y_{i1}+\delta_2 y_{i2}+\delta_3 y_{i3}+\delta_4 y_{i4}+\delta_5 y_{i5}+\delta_6 y_{i6}; \; i = 1, \dots, 73. \end{equation*} |
Parameter | MLE | MPSE | BSE | LSE | |||||||
MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI | ||||
\theta | 2.5823 | (2.3845, 2.7800) | 2.4194 | (1.9312, 2.9077) | 2.5500 | (2.0647, 3.0615) | 3.0267 | (2.4038, 3.6891) | |||
\beta | 0.2572 | (-0.0215, 0.5359) | 0.2579 | (0.2290, 0.2868) | 0.2581 | (0.2314, 0.2873) | 0.2536 | (0.2303, 0.2790)
Table 13 presents the fitting results for the beta, Kumaraswamy, and ULE regression models: the ML estimates of \alpha and \delta_i, i = 0, 1, 2, 3, 4, 5, 6 , along with their respective standard errors (SE) and p-values. Additionally, each regression model is accompanied by its AIC and BIC statistics.
Parameters | Beta: Estimate | SE | p-value | Kumaraswamy: Estimate | SE | p-value | ULE: Estimate | SE | p-value
\delta_0 | 1.888 | 1.172 | 0.112 | 2.539 | 1.550 | 0.106 | 3.823 | 1.463 | 0.011
\delta_1 | -0.001 | 0.014 | 0.932 | -0.036 | 0.018 | 0.043 | -0.014 | 0.012 | 0.232
\delta_2 | 0.178 | 0.232 | 0.445 | 0.596 | 0.390 | 0.131 | 0.142 | 0.250 | 0.573
\delta_3 | -0.512 | 0.123 | < 0.000 | -0.798 | 0.161 | < 0.000 | -0.872 | 0.149 | < 0.000
\delta_4 | 1.236 | 0.459 | 0.009 | 5.257 | 1.436 | 0.0005 | 1.799 | 0.444 | 0.0001
\delta_5 | -0.012 | 0.088 | 0.890 | -0.028 | 0.120 | 0.817 | -0.070 | 0.101 | 0.493
\delta_6 | -0.004 | 0.021 | 0.855 | -0.027 | 0.032 | 0.396 | 0.006 | 0.024 | 0.808
\alpha | 6.331 | 1.123 | < 0.000 | 0.978 | 0.106 | < 0.000 | 3.503 | 0.357 | < 0.000
AIC | -159.446 (Beta) | -181.653 (Kumaraswamy) | -200.186 (ULE)
BIC | -141.122 (Beta) | -163.330 (Kumaraswamy) | -181.863 (ULE)
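The AIC and BIC columns follow the standard definitions \mathrm{AIC} = 2k - 2\hat{\ell} and \mathrm{BIC} = k\ln n - 2\hat{\ell}. As a quick consistency check, the ULE row can be reproduced with k = 8 free parameters ( \delta_0, \dots, \delta_6 and \alpha ) and n = 73 observations; the maximized log-likelihood below is back-solved from the reported AIC, since it is not listed in the table:

```python
import math

def aic_bic(loglik, k, n):
    """Standard information criteria: AIC = 2k - 2*loglik, BIC = k*ln(n) - 2*loglik."""
    return 2 * k - 2 * loglik, k * math.log(n) - 2 * loglik

# ULE regression model on the n = 73 observations, with 8 free parameters.
# loglik = 108.093 is back-computed from the reported AIC of -200.186.
aic, bic = aic_bic(loglik=108.093, k=8, n=73)
```

The same helper reproduces the beta and Kumaraswamy rows from their respective log-likelihoods; smaller values of either criterion indicate a better trade-off between fit and model complexity.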
Based on Table 13, the type of chemotherapy and age have a statistically significant effect on the response variable, while gender has no statistically significant effect at the usual significance level in any of the regression models.
Furthermore, compared with the beta and Kumaraswamy regression models, the ULE regression model provides the best fit to the data, attaining the smallest AIC and BIC values. Figure 9 examines this result more closely, displaying the P-P plots of the Cox-Snell residuals and the Q-Q plots of the randomized quantile residuals, with simulated envelopes, for all fitted regression models.
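Both residual types reduce to simple transforms of the probability integral transform (PIT) values u_i = F(y_i; \hat{\theta}) : Cox-Snell residuals -\log(1-u_i) should behave like an Exp(1) sample under a well-fitting model, and quantile residuals \Phi^{-1}(u_i) like a standard normal sample. The sketch below is illustrative only: the ULE CDF is not restated here, so a Kumaraswamy model stands in for the fitted distribution (any unit-interval CDF plays the same role):

```python
import numpy as np
from statistics import NormalDist

def cox_snell(u):
    """Cox-Snell residuals from PIT values u_i = F(y_i; theta_hat);
    approximately Exp(1)-distributed when the model fits."""
    return -np.log1p(-u)

def quantile_residuals(u):
    """Randomized quantile residuals; approximately N(0, 1) when the model fits.
    For a continuous response no extra randomization is needed."""
    nd = NormalDist()
    return np.array([nd.inv_cdf(ui) for ui in u])

# Stand-in demonstration: simulate from a Kumaraswamy(a, b) model and
# compute the residuals under its own (correct) CDF.
rng = np.random.default_rng(1)
a, b = 2.4, 6.7
y = (1.0 - (1.0 - rng.uniform(size=500)) ** (1.0 / b)) ** (1.0 / a)  # inverse-CDF sampling
u = np.clip(1.0 - (1.0 - y ** a) ** b, 1e-12, 1 - 1e-12)             # PIT values F(y)

r_cs = cox_snell(u)          # mean should be close to 1 under a correct model
r_q = quantile_residuals(u)  # mean close to 0, standard deviation close to 1
```

In practice one plots r_cs against Exp(1) quantiles (P-P plot) and r_q against normal quantiles (Q-Q plot), as in Figure 9; systematic departures from the reference line signal lack of fit.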
This paper introduced a versatile two-parameter unit distribution, the ULE distribution, designed for modeling datasets confined to the interval (0, 1) . We derived and analyzed its survival function and hazard rate function, illustrating their behavior graphically. We investigated several methods for estimating the ULE parameters, including maximum likelihood estimation, maximum product spacing, least squares, and Bayesian estimation. An extensive simulation study revealed that Bayesian estimation performs robustly, with high coverage probabilities (CPs) and acceptable average interval widths (AWs). The ML estimates consistently produce reliable MSEs, average biases (ABs), and CPs, making them a trustworthy option. The LSE and MPSE may exhibit greater variability at smaller sample sizes, but their performance improves and remains satisfactory as n increases. Additionally, we proposed a novel quantile regression model based on the ULE distribution for unit response variables, providing an alternative to the commonly used beta, Kumaraswamy, and unit-Weibull regression models. The randomized quantile and Cox-Snell residuals proved to be useful tools for assessing the goodness of fit of the ULE quantile regression model, offering valuable insight into the model's suitability for a specific dataset. Our empirical analysis focused on datasets related to the recovery rate of CD34+ cells and the cost-effectiveness of risk management; goodness-of-fit tests confirmed the suitability of the ULE distribution for modeling these data. One limitation of this study arises from time and cost constraints: since we consider complete samples, units may take a long time to fail, and running a complete-sample experiment can be expensive, especially in medical and biological fields. Censoring schemes can be employed to address this limitation.
Hence, our plan for future work is to analyze censored samples instead of complete ones. Another limitation is that, although the ULE distribution is proposed as an effective model for real-world data, its performance may vary with the nature and characteristics of the dataset. It may not outperform existing models in every scenario, particularly when the data do not exhibit the long-tailed behavior for which the ULE distribution is suited. Finally, the ULE distribution is defined on the unit interval (0, 1) , so it may not be suitable for data that extend beyond this interval or for which transformations onto this range introduce bias or distortion.
Hanan Haj Ahmad: Methodology, validation, investigation, resources, writing–review & editing, funding acquisition; Kariema A. Elnagar: Methodology, conceptualization, investigation, resources, data curation, writing–original draft, writing–review & editing. All authors have read and approved the final version of the manuscript for publication.
This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [GRANT No. KFU242542].
The authors declare no conflicts of interest.
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0037 | 0.148 | 0.0037 | 0.142 | 0.00349327 | 0.076 | 0.0037 | 0.127 | ||||
50 | ABs | 0.0019 | 0.073 | 0.0024 | -0.130 | 0.006 | 0.056 | -0.002 | -0.105 | |||
AWs | 0.228 | 1.441 | 0.239 | 1.352 | 0.229 | 1.28257 | 0.248 | 1.517 | ||||
CPs | 0.936 | 0.942 | 0.945 | 0.88 | 0.947 | 0.973 | 0.96 | 0.93 | ||||
MSEs | 0.0017 | 0.067 | 0.0016 | 0.071 | 0.0017 | 0.054 | 0.0019 | 0.063 | ||||
100 | ABs | 0.00027 | -0.0023 | -0.0014 | -0.103 | 0.0019 | 0.0077 | 0.007 | -0.027 | |||
AWs | 0.1629 | 0.995 | 0.166 | 0.964 | 0.163 | 0.943 | 0.170 | 1.115 | ||||
CPs | 0.95 | 0.942 | 0.955 | 0.893 | 0.953 | 0.953 | 0.925 | 0.965 | ||||
MSEs | 0.0011 | 0.043 | 0.001 | 0.049 | 0.0012 | 0.035 | 0.001 | 0.048 | ||||
150 | ABs | 0.0047 | 0.016 | 0.004 | -0.090 | 0.008 | -0.011 | 0.0001 | 0.003 | |||
AWs | 0.133 | 0.817 | 0.136 | 0.790 | 0.133 | 0.775481 | 0.135 | 0.925 | ||||
CPs | 0.958 | 0.952 | 0.968 | 0.905 | 0.973 | 0.967 | 0.935 | 0.965 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0108 | 0.005 | 0.011 | 0.005 | 0.027 | 0.0124 | 0.026 | 0.007 | ||||
50 | ABs | 0.0217 | 0.009 | 0.020 | -0.021 | 0.069 | 0.017 | 0.020 | 0.0006 | |||
AWs | 0.425 | 0.306 | 0.404 | 0.268 | 0.654 | 0.419 | 0.589 | 0.330 | ||||
CPs | 0.976 | 0.956 | 0.973 | 0.918 | 0.973 | 0.973 | 0.91 | 0.93 | ||||
MSEs | 0.0055 | 0.003 | 0.005 | 0.003 | 0.0134 | 0.007 | 0.011 | 0.004 | ||||
100 | ABs | 0.009 | -0.0004 | 0.004 | -0.018 | 0.032 | 0.011 | 0.011 | -0.003 | |||
AWs | 0.286 | 0.206 | 0.278 | 0.194 | 0.424 | 0.289 | 0.391 | 0.235 | ||||
CPs | 0.942 | 0.946 | 0.94 | 0.896 | 0.953 | 0.947 | 0.93 | 0.95 | ||||
MSEs | 0.0034 | 0.0017 | 0.003 | 0.002 | 0.005 | 0.0019 | 0.0057 | 0.0022 | ||||
150 | ABs | 0.0121 | -0.0003 | 0.002 | -0.013 | 0.018 | 0.0005 | 0.013 | -0.005 | |||
AWs | 0.231 | 0.166 | 0.226 | 0.160 | 0.236 | 0.168 | 0.317 | 0.191 | ||||
CPs | 0.968 | 0.958 | 0.963 | 0.93 | 0.887 | 0.933 | 0.96 | 0.93 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.521 | 0.004 | 0.523 | 0.003 | 0.103 | 0.003 | 2.276 | 0.005 | ||||
50 | ABs | 0.163 | 0.007 | 0.161 | -0.018 | 0.035 | 0.012 | 0.242 | -0.0009 | |||
AWs | 3.009 | 0.254 | 2.831 | 0.220 | 1.960 | 0.218 | 4.488 | 0.286 | ||||
CPs | 0.976 | 0.954 | 0.973 | 0.905 | 1.000 | 0.953 | 0.92 | 0.92 | ||||
MSEs | 0.2443 | 0.002 | 0.264 | 0.002 | 0.138 | 0.0017 | 0.669 | 0.003 | ||||
100 | ABs | 0.067 | -0.0009 | 0.032 | -0.014 | 0.023 | 0.007 | 0.097 | -0.002 | |||
AWs | 2.006 | 0.170 | 1.934 | 0.159 | 1.588 | 0.158 | 2.981 | 0.202 | ||||
CPs | 0.968 | 0.948 | 0.935 | 0.905 | 0.973 | 0.967 | 0.93 | 0.95 | ||||
MSEs | 0.178 | 0.001 | 0.154 | 0.001 | 0.094 | 0.001 | 0.309 | 0.002 | ||||
150 | ABs | 0.082 | 0.00017 | 0.017 | -0.010 | 0.019 | 0.003 | -0.005 | 0.005 | |||
AWs | 1.616 | 0.137 | 1.577 | 0.132 | 1.379 | 0.131 | 2.297 | 0.166 | ||||
CPs | 0.95 | 0.946 | 0.965 | 0.925 | 0.967 | 0.94 | 0.95 | 0.95 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0017 | 0.020 | 0.002 | 0.018 | 0.001 | 0.0133 | 0.002 | 0.021 | ||||
50 | ABs | 0.006 | 0.024 | 0.007 | -0.045 | 0.006 | 0.011 | 0.003 | -0.009 | |||
AWs | 0.158 | 0.527 | 0.161 | 0.484 | 0.162 | 0.518 | 0.178 | 0.551 | ||||
CPs | 0.956 | 0.944 | 0.958 | 0.903 | 0.973 | 0.96 | 0.91 | 0.93 | ||||
MSEs | 0.0008 | 0.009 | 0.0007 | 0.010 | 0.0010 | 0.008 | 0.001 | 0.011 | ||||
100 | ABs | 0.002 | 0.0004 | -0.00004 | -0.036 | 0.008 | 0.0001 | 0.002 | -0.012 | |||
AWs | 0.110 | 0.362 | 0.110 | 0.347 | 0.113 | 0.360 | 0.123 | 0.395 | ||||
CPs | 0.958 | 0.936 | 0.96 | 0.898 | 0.933 | 0.953 | 0.93 | 0.95 | ||||
MSEs | 0.0006 | 0.006 | 0.0005 | 0.006 | 0.0005 | 0.007 | 0.0007 | 0.007 | ||||
150 | ABs | 0.0045 | 0.003 | 0.0002 | -0.026 | 0.0043 | 0.0011 | 0.0004 | -0.013 | |||
AWs | 0.0899 | 0.295 | 0.090 | 0.287 | 0.090 | 0.295 | 0.098 | 0.319 | ||||
CPs | 0.95 | 0.956 | 0.955 | 0.925 | 0.953 | 0.92 | 0.92 | 0.92 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.128 | 0.012 | 0.130 | 0.011 | 0.063 | 0.011 | 0.210 | 0.015 | ||||
50 | ABs | 0.054 | 0.018 | 0.061 | -0.036 | 0.020 | 0.021 | 0.037 | -0.005 | |||
AWs | 1.397 | 0.438 | 1.411 | 0.400 | 1.226 | 0.420 | 1.672 | 0.461 | ||||
CPs | 0.948 | 0.952 | 0.973 | 0.898 | 0.973 | 0.973 | 0.91 | 0.93 | ||||
MSEs | 0.0609 | 0.006 | 0.059 | 0.007 | 0.051 | 0.006 | 0.098 | 0.008 | ||||
100 | ABs | 0.022 | -0.0007 | 0.005 | -0.028 | 0.036 | 0.004 | 0.022 | -0.008 | |||
AWs | 0.972 | 0.301 | 0.968 | 0.288 | 0.924 | 0.297 | 1.144 | 0.329 | ||||
CPs | 0.952 | 0.938 | 0.948 | 0.91 | 0.947 | 0.953 | 0.93 | 0.96 | ||||
MSEs | 0.040 | 0.004 | 0.039 | 0.004 | 0.036 | 0.004 | 0.049 | 0.005 | ||||
150 | ABs | 0.0408 | 0.0007 | 0.003 | -0.0201 | 0.050 | 0.004 | 0.033 | -0.007 | |||
AWs | 0.795 | 0.245 | 0.785 | 0.238 | 0.771 | 0.242 | 0.929 | 0.271 | ||||
CPs | 0.964 | 0.958 | 0.958 | 0.928 | 0.96 | 0.953 | 0.96 | 0.93 |
\tau | n | \theta | \delta_0 | \delta_1 | \delta_2 | |||||||||||||||
MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | |||||
0.1 | 50 | 0.007 | 0.016 | 0.303 | 0.943 | 0.097 | 0.033 | 1.202 | 0.943 | 0.251 | 0.080 | 1.960 | 0.951 | 0.051 | 0.023 | 0.871 | 0.96 | |||
100 | 0.003 | 0.006 | 0.207 | 0.966 | 0.053 | 0.012 | 0.835 | 0.946 | 0.102 | 0.012 | 1.252 | 0.937 | 0.038 | 0.010 | 0.725 | 0.946 | ||||
150 | 0.002 | -0.002 | 0.167 | 0.946 | 0.035 | 0.024 | 0.705 | 0.931 | 0.072 | 0.004 | 1.009 | 0.926 | 0.024 | 0.011 | 0.593 | 0.963 | ||||
0.25 | 50 | 0.008 | 0.018 | 0.304 | 0.908 | 0.092 | 0.063 | 1.202 | 0.944 | 0.281 | -0.033 | 1.986 | 0.952 | 0.0553 | 0.016 | 0.882 | 0.944 | |||
100 | 0.003 | 0.004 | 0.206 | 0.944 | 0.053 | 0.023 | 0.833 | 0.924 | 0.0998 | 0.016 | 1.243 | 0.964 | 0.038 | 0.010 | 0.710 | 0.944 | ||||
150 | 0.002 | -0.002 | 0.167 | 0.932 | 0.033 | 0.006 | 0.703 | 0.948 | 0.061 | 0.003 | 1.013 | 0.952 | 0.024 | 0.008 | 0.604 | 0.944 | ||||
0.5 | 50 | 0.008 | 0.015 | 0.300 | 0.912 | 0.121 | 0.047 | 1.381 | 0.948 | 0.293 | -0.031 | 2.006 | 0.956 | 0.058 | 0.023 | 0.897 | 0.936 | |||
100 | 0.003 | 0.004 | 0.205 | 0.94 | 0.065 | 0.023 | 0.942 | 0.936 | 0.099 | 0.016 | 1.246 | 0.968 | 0.039 | 0.011 | 0.716 | 0.936 | ||||
150 | 0.002 | -0.002 | 0.166 | 0.928 | 0.042 | 0.016 | 0.789 | 0.944 | 0.062 | 0.005 | 1.015 | 0.952 | 0.024 | 0.007 | 0.609 | 0.948 | ||||
0.75 | 50 | 0.006 | 0.016 | 0.273 | 0.928 | 0.437 | -0.026 | 2.367 | 0.932 | 0.269 | -0.017 | 2.0686 | 0.968 | 0.081 | 0.077 | 0.991 | 0.932 | |||
100 | 0.002 | 0.007 | 0.192 | 0.96 | 0.163 | 0.005 | 1.614 | 0.976 | 0.120 | -0.037 | 1.285 | 0.952 | 0.047 | 0.050 | 0.794 | 0.952 | ||||
150 | 0.002 | -0.001 | 0.153 | 0.932 | 0.126 | 0.041 | 1.310 | 0.924 | 0.060 | -0.018 | 1.033 | 0.984 | 0.032 | 0.026 | 0.648 | 0.94 | ||||
0.9 | 50 | 0.006 | 0.048 | 0.256 | 0.952 | 0.810 | -0.420 | 3.339 | 0.968 | 0.260 | 0.014 | 2.213 | 0.964 | 0.097 | 0.060 | 1.181 | 0.952 | |||
100 | 0.004 | 0.041 | 0.187 | 0.852 | 0.586 | -0.403 | 2.327 | 0.876 | 0.156 | 0.002 | 1.396 | 0.94 | 0.082 | 0.040 | 0.958 | 0.94 | ||||
150 | 0.003 | 0.036 | 0.148 | 0.876 | 0.329 | -0.313 | 1.836 | 0.892 | 0.079 | -0.017 | 1.104 | 0.96 | 0.047 | -0.024 | 0.759 | 0.932 |
\tau | n | \theta | \delta_0 | \delta_1 | |||||||||||
MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | MSEs | ABs | AWs | CPs | ||||
0.1 | 50 | 0.007 | 0.011 | 0.296 | 0.916 | 0.338 | -0.025 | 2.504 | 0.968 | 0.480 | 0.078 | 2.831 | 0.968 | ||
100 | 0.003 | 0.005 | 0.206 | 0.956 | 0.158 | 0.013 | 1.592 | 0.948 | 0.199 | 0.020 | 1.895 | 0.972 | |||
150 | 0.002 | -0.002 | 0.167 | 0.948 | 0.103 | 0.0003 | 1.336 | 0.956 | 0.150 | -0.002 | 1.569 | 0.948 | |||
0.25 | 50 | 0.007 | 0.012 | 0.297 | 0.908 | 0.349 | -0.046 | 2.520 | 0.976 | 0.518 | 0.099 | 2.846 | 0.968 | ||
100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.188 | -0.007 | 1.602 | 0.912 | 0.230 | 0.045 | 1.905 | 0.956 | |||
150 | 0.002 | -0.001 | 0.167 | 0.920 | 0.118 | -0.020 | 1.344 | 0.972 | 0.163 | 0.030 | 1.574 | 0.948 | |||
0.5 | 50 | 0.007 | 0.016 | 0.299 | 0.929 | 0.425 | -0.030 | 2.806 | 0.954 | 0.440 | 0.039 | 2.889 | 0.974 | ||
100 | 0.003 | 0.004 | 0.206 | 0.954 | 0.217 | -0.016 | 1.837 | 0.940 | 0.240 | 0.017 | 1.954 | 0.960 | |||
150 | 0.002 | -0.002 | 0.167 | 0.951 | 0.153 | 0.010 | 1.519 | 0.943 | 0.170 | 0.027 | 1.596 | 0.949 | |||
0.75 | 50 | 0.007 | 0.012 | 0.298 | 0.936 | 2.089 | -0.254 | 5.308 | 0.944 | 0.974 | 0.242 | 3.753 | 0.940 | ||
100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.650 | -0.064 | 3.424 | 0.956 | 0.346 | 0.061 | 2.360 | 0.936 | |||
150 | 0.002 | -0.003 | 0.166 | 0.936 | 0.561 | 0.012 | 2.785 | 0.928 | 0.237 | 0.031 | 1.899 | 0.936 | |||
0.9 | 50 | 0.007 | 0.021 | 0.309 | 0.928 | 4.413 | 0.488 | 3.186 | 0.898 | 1.563 | -0.764 | 1.728 | 0.902 | ||
100 | 0.003 | 0.008 | 0.214 | 0.980 | 1.707 | 0.817 | 2.539 | 0.902 | 0.986 | -0.893 | 1.068 | 0.880 | |||
150 | 0.002 | 0.007 | 0.173 | 0.960 | 1.518 | 0.873 | 1.606 | 0.926 | 0.989 | -0.925 | 0.630 | 0.928 |
Model | Estimates (Standard Errors) | K-S (p-value) | AIC | BIC | -Loglik | W^\star | A^\star | |
\beta | \theta | |||||||
ULE | 3.2454 | 1.77959 | 0.0500323 | -383.997 | -377.044 | -193.998 | 0.0675539 | 0.48204 |
(0.142222) | (0.100898) | (0.587827) | ||||||
Beta | 2.28593 | 8.66714 | 0.0650045 | -379.734 | -372.782 | -191.867 | 0.14045 | 0.87598 |
(0.458761) | (0.332228) | (0.264737) | ||||||
Kumaraswamy | 2.43553 | 6.69423 | 0.0722763 | -377.528 | -370.575 | -190.764 | 0.191987 | 1.14739 |
(0.458761) | (0.332228) | (0.16457) | ||||||
Unit-Omega | 3.6080 | 7.7337 | 0.05331 | -383.955 | -377.002 | -193.977 | 0.0820 | 0.5276 |
(0.3322) | (0.4588) | (0.5055) | ||||||
Unit-BurrXII | 1.73211 | 10.076 | 0.0522282 | -383.005 | -376.052 | -193.503 | 0.0888576 | 0.582411 | ||||
(0.458761) | (0.332228) | (0.532105) |
Parameter | MLE | MPSE | BSE | LSE | |||||||
MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI | ||||
\theta | 1.77959 | (1.5818, 1.9774) | 1.8044 | (1.6114, 2.0062) | 1.7469 | (1.5527, 1.9412) | 1.8551 | (1.6090, 2.0701) | |||
\beta | 3.2454 | (2.9667, 3.5241) | 3.2254 | (2.9628, 3.5129) | 3.2453 | (2.9631, 3.5275) | 3.1989 | (2.9563, 3.4995) |
Parameters | Beta | Kumaraswamy | ULE | ||||||||
Estimate | SE | p-value | Estimate | SE | p-value | Estimate | SE | p-value | |||
\delta_0 | 0.9990 | 0.1291 | < 0.0000 | 1.1997 | 0.1397 | < 0.0000 | 1.04811 | 0.135555 | < 0.0000 | ||
\delta_1 | 0.0659 | 0.0939 | 0.483 | 0.0418 | 0.0955 | 0.6619 | 0.100013 | 0.07255 | 0.169365 | ||
\delta_2 | 0.2116 | 0.1038 | 0.0425 | 0.1833 | 0.1150 | 0.1123 | 0.21845 | 0.104438 | 0.037553 | ||
\delta_3 | 0.0142 | 0.0054 | 0.0088 | 0.0107 | 0.0059 | 0.0692 | 0.01709 | 0.00547946 | 0.002043 | ||
\alpha | 11.3447 | 1.0181 | < 0.0000 | 6.7274 | 0.4543 | < 0.0000 | 1.82216 | 0.10317 | < 0.0000 | ||
AIC | -381.79 | -375.66 | -388.659 | ||||||||
BIC | -364.41 | -358.28 | -371.276 |
Model | Estimates (Standard Errors) | K-S (p-value) | AIC | BIC | -Loglik | Cramér-von Mises | Anderson-Darling | |
\beta | \theta | |||||||
ULE | 0.257 | 2.582 | 0.068 | -170.611 | -166.03 | -87.306 | 0.096 | 0.874 |
(0.142) | (0.101) | (0.893) | ||||||
Beta | 3.798 | 0.613 | 0.181 | -148.235 | -143.654 | -76.118 | 0.697 | 3.961 |
(0.459) | (0.332) | (0.017) | ||||||
Kumaraswamy | 3.441 | 0.665 | 0.154 | -153.308 | -148.727 | -78.654 | 0.502 | 3.097 |
(0.459) | (0.332) | (0.064) | ||||||
Unit-Omega | 5.377 | 0.787 | 0.128 | -163.383 | -158.802 | -83.691 | 0.336 | 2.153 | ||||
(0.928) | (0.073) | (0.180) | ||||||
Unit-BurrXII | 0.348 | 2.841 | 0.338 | -89.013 | -84.432 | -46.507 | 2.662 | 12.874 |
(0.0625) | (0.421) | (0.0) |
Parameter | MLE | MPSE | BSE | LSE | |||||||
MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI | ||||
\theta | 2.5823 | (2.3845, 2.7800) | 2.4194 | (1.9312, 2.9077) | 2.5500 | (2.0647, 3.0615) | 3.0267 | (2.4038, 3.6891) | |||
\beta | 0.2572 | (-0.0215, 0.5359 | 0.2579 | (0.2290, 0.2868) | 0.2581 | (0.2314, 0.2873) | 0.2536 | (0.2303, 0.279) |
Parameters | Beta | Kumaraswamy | ULE | ||||||||
Estimate | SE | p-value | Estimate | SE | p-value | Estimate | SE | p-value | |||
\delta_0 | 1.888 | 1.172 | 0.112 | 2.539 | 1.550 | 0.106 | 3.823 | 1.463 | 0.011 | ||
\delta_1 | -0.001 | 0.014 | 0.932 | -0.036 | 0.018 | 0.043 | -0.014 | 0.012 | 0.232 | ||
\delta_2 | 0.178 | 0.232 | 0.445 | 0.596 | 0.390 | 0.131 | 0.142 | 0.250 | 0.573 | ||
\delta_3 | -0.512 | 0.123 | < 0.000 | -0.798 | 0.161 | < 0.000 | -0.872 | 0.149 | < 0.000 | ||
\delta_4 | 1.236 | 0.459 | 0.009 | 5.257 | 1.436 | 0.0005 | 1.799 | 0.444 | 0.0001 | ||
\delta_5 | -0.012 | 0.088 | 0.890 | -0.028 | 0.120 | 0.817 | -0.070 | 0.101 | 0.493 | ||
\delta_6 | -0.004 | 0.021 | 0.855 | -0.027 | 0.032 | 0.396 | 0.006 | 0.024 | 0.808 | ||
\alpha | 6.331 | 1.123 | < 0.000 | 0.978 | 0.106 | < 0.000 | 3.503 | 0.357 | < 0.000 | ||
AIC | -159.446 | -181.653 | -200.186 | ||||||||
BIC | -141.122 | -163.33 | -181.863 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0037 | 0.148 | 0.0037 | 0.142 | 0.00349327 | 0.076 | 0.0037 | 0.127 | ||||
50 | ABs | 0.0019 | 0.073 | 0.0024 | -0.130 | 0.006 | 0.056 | -0.002 | -0.105 | |||
AWs | 0.228 | 1.441 | 0.239 | 1.352 | 0.229 | 1.28257 | 0.248 | 1.517 | ||||
CPs | 0.936 | 0.942 | 0.945 | 0.88 | 0.947 | 0.973 | 0.96 | 0.93 | ||||
MSEs | 0.0017 | 0.067 | 0.0016 | 0.071 | 0.0017 | 0.054 | 0.0019 | 0.063 | ||||
100 | ABs | 0.00027 | -0.0023 | -0.0014 | -0.103 | 0.0019 | 0.0077 | 0.007 | -0.027 | |||
AWs | 0.1629 | 0.995 | 0.166 | 0.964 | 0.163 | 0.943 | 0.170 | 1.115 | ||||
CPs | 0.95 | 0.942 | 0.955 | 0.893 | 0.953 | 0.953 | 0.925 | 0.965 | ||||
MSEs | 0.0011 | 0.043 | 0.001 | 0.049 | 0.0012 | 0.035 | 0.001 | 0.048 | ||||
150 | ABs | 0.0047 | 0.016 | 0.004 | -0.090 | 0.008 | -0.011 | 0.0001 | 0.003 | |||
AWs | 0.133 | 0.817 | 0.136 | 0.790 | 0.133 | 0.775481 | 0.135 | 0.925 | ||||
CPs | 0.958 | 0.952 | 0.968 | 0.905 | 0.973 | 0.967 | 0.935 | 0.965 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0108 | 0.005 | 0.011 | 0.005 | 0.027 | 0.0124 | 0.026 | 0.007 | ||||
50 | ABs | 0.0217 | 0.009 | 0.020 | -0.021 | 0.069 | 0.017 | 0.020 | 0.0006 | |||
AWs | 0.425 | 0.306 | 0.404 | 0.268 | 0.654 | 0.419 | 0.589 | 0.330 | ||||
CPs | 0.976 | 0.956 | 0.973 | 0.918 | 0.973 | 0.973 | 0.91 | 0.93 | ||||
MSEs | 0.0055 | 0.003 | 0.005 | 0.003 | 0.0134 | 0.007 | 0.011 | 0.004 | ||||
100 | ABs | 0.009 | -0.0004 | 0.004 | -0.018 | 0.032 | 0.011 | 0.011 | -0.003 | |||
AWs | 0.286 | 0.206 | 0.278 | 0.194 | 0.424 | 0.289 | 0.391 | 0.235 | ||||
CPs | 0.942 | 0.946 | 0.94 | 0.896 | 0.953 | 0.947 | 0.93 | 0.95 | ||||
MSEs | 0.0034 | 0.0017 | 0.003 | 0.002 | 0.005 | 0.0019 | 0.0057 | 0.0022 | ||||
150 | ABs | 0.0121 | -0.0003 | 0.002 | -0.013 | 0.018 | 0.0005 | 0.013 | -0.005 | |||
AWs | 0.231 | 0.166 | 0.226 | 0.160 | 0.236 | 0.168 | 0.317 | 0.191 | ||||
CPs | 0.968 | 0.958 | 0.963 | 0.93 | 0.887 | 0.933 | 0.96 | 0.93 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.521 | 0.004 | 0.523 | 0.003 | 0.103 | 0.003 | 2.276 | 0.005 | ||||
50 | ABs | 0.163 | 0.007 | 0.161 | -0.018 | 0.035 | 0.012 | 0.242 | -0.0009 | |||
AWs | 3.009 | 0.254 | 2.831 | 0.220 | 1.960 | 0.218 | 4.488 | 0.286 | ||||
CPs | 0.976 | 0.954 | 0.973 | 0.905 | 1. | 0.953 | 0.92 | 0.92 | ||||
MSEs | 0.2443 | 0.002 | 0.264 | 0.002 | 0.138 | 0.0017 | 0.669 | 0.003 | ||||
100 | ABs | 0.067 | -0.0009 | 0.032 | -0.014 | 0.023 | 0.007 | 0.097 | -0.002 | |||
AWs | 2.006 | 0.170 | 1.934 | 0.159 | 1.588 | 0.158 | 2.981 | 0.202 | ||||
CPs | 0.968 | 0.948 | 0.935 | 0.905 | 0.973 | 0.967 | 0.93 | 0.95 | ||||
MSEs | 0.178 | 0.001 | 0.154 | 0.001 | 0.094 | 0.001 | 0.309 | 0.002 | ||||
150 | ABs | 0.082 | 0.00017 | 0.017 | -0.010 | 0.019 | 0.003 | -0.005 | 0.005 | |||
AWs | 1.616 | 0.137 | 1.577 | 0.132 | 1.379 | 0.131 | 2.297 | 0.166 | ||||
CPs | 0.95 | 0.946 | 0.965 | 0.925 | 0.967 | 0.94 | 0.95 | 0.95 |
n | Est. | ML | MPSE | Bayesian | LSE | |||||||
\beta | \theta | \beta | \theta | \beta | \theta | \beta | \theta | |||||
MSEs | 0.0017 | 0.020 | 0.002 | 0.018 | 0.001 | 0.0133 | 0.002 | 0.021 | ||||
50 | ABs | 0.006 | 0.024 | 0.007 | -0.045 | 0.006 | 0.011 | 0.003 | -0.009 | |||
AWs | 0.158 | 0.527 | 0.161 | 0.484 | 0.162 | 0.518 | 0.178 | 0.551 | ||||
CPs | 0.956 | 0.944 | 0.958 | 0.903 | 0.973 | 0.96 | 0.91 | 0.93 | ||||
MSEs | 0.0008 | 0.009 | 0.0007 | 0.010 | 0.0010 | 0.008 | 0.001 | 0.011 | ||||
100 | ABs | 0.002 | 0.0004 | -0.00004 | -0.036 | 0.008 | 0.0001 | 0.002 | -0.012 | |||
AWs | 0.110 | 0.362 | 0.110 | 0.347 | 0.113 | 0.360 | 0.123 | 0.395 | ||||
CPs | 0.958 | 0.936 | 0.96 | 0.898 | 0.933 | 0.953 | 0.93 | 0.95 | ||||
MSEs | 0.0006 | 0.006 | 0.0005 | 0.006 | 0.0005 | 0.007 | 0.0007 | 0.007 | ||||
150 | ABs | 0.0045 | 0.003 | 0.0002 | -0.026 | 0.0043 | 0.0011 | 0.0004 | -0.013 | |||
AWs | 0.0899 | 0.295 | 0.090 | 0.287 | 0.090 | 0.295 | 0.098 | 0.319 | ||||
CPs | 0.95 | 0.956 | 0.955 | 0.925 | 0.953 | 0.92 | 0.92 | 0.92 |
| n | Est. | ML \beta | ML \theta | MPSE \beta | MPSE \theta | Bayesian \beta | Bayesian \theta | LSE \beta | LSE \theta |
|---|---|---|---|---|---|---|---|---|---|
| 50 | MSEs | 0.128 | 0.012 | 0.130 | 0.011 | 0.063 | 0.011 | 0.210 | 0.015 |
| | ABs | 0.054 | 0.018 | 0.061 | -0.036 | 0.020 | 0.021 | 0.037 | -0.005 |
| | AWs | 1.397 | 0.438 | 1.411 | 0.400 | 1.226 | 0.420 | 1.672 | 0.461 |
| | CPs | 0.948 | 0.952 | 0.973 | 0.898 | 0.973 | 0.973 | 0.91 | 0.93 |
| 100 | MSEs | 0.0609 | 0.006 | 0.059 | 0.007 | 0.051 | 0.006 | 0.098 | 0.008 |
| | ABs | 0.022 | -0.0007 | 0.005 | -0.028 | 0.036 | 0.004 | 0.022 | -0.008 |
| | AWs | 0.972 | 0.301 | 0.968 | 0.288 | 0.924 | 0.297 | 1.144 | 0.329 |
| | CPs | 0.952 | 0.938 | 0.948 | 0.91 | 0.947 | 0.953 | 0.93 | 0.96 |
| 150 | MSEs | 0.040 | 0.004 | 0.039 | 0.004 | 0.036 | 0.004 | 0.049 | 0.005 |
| | ABs | 0.0408 | 0.0007 | 0.003 | -0.0201 | 0.050 | 0.004 | 0.033 | -0.007 |
| | AWs | 0.795 | 0.245 | 0.785 | 0.238 | 0.771 | 0.242 | 0.929 | 0.271 |
| | CPs | 0.964 | 0.958 | 0.958 | 0.928 | 0.96 | 0.953 | 0.96 | 0.93 |
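The accuracy measures tabulated above (MSEs, ABs, AWs, CPs) are standard summaries of Monte Carlo replicates: mean squared error, average bias, average interval width, and coverage probability. A minimal illustrative sketch follows; the helper name `mc_metrics` and its inputs are ours, not from the paper.

```python
import numpy as np

def mc_metrics(estimates, lowers, uppers, true_value):
    """Summarize Monte Carlo replicates of one parameter.

    estimates, lowers, uppers: arrays of length R, one entry per replicate
    (point estimate and interval endpoints); true_value: the parameter value
    used to simulate the data. Returns (MSE, AB, AW, CP).
    """
    estimates = np.asarray(estimates, dtype=float)
    lowers = np.asarray(lowers, dtype=float)
    uppers = np.asarray(uppers, dtype=float)
    mse = np.mean((estimates - true_value) ** 2)   # mean squared error
    ab = np.mean(estimates - true_value)           # average bias
    aw = np.mean(uppers - lowers)                  # average interval width
    cp = np.mean((lowers <= true_value) & (true_value <= uppers))  # coverage
    return mse, ab, aw, cp
```

As expected for a well-calibrated 95% interval, CP should approach 0.95 and AW should shrink as n grows, which is the pattern visible across the n = 50, 100, 150 blocks above.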
| \tau | n | \theta MSEs | \theta ABs | \theta AWs | \theta CPs | \delta_0 MSEs | \delta_0 ABs | \delta_0 AWs | \delta_0 CPs | \delta_1 MSEs | \delta_1 ABs | \delta_1 AWs | \delta_1 CPs | \delta_2 MSEs | \delta_2 ABs | \delta_2 AWs | \delta_2 CPs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 50 | 0.007 | 0.016 | 0.303 | 0.943 | 0.097 | 0.033 | 1.202 | 0.943 | 0.251 | 0.080 | 1.960 | 0.951 | 0.051 | 0.023 | 0.871 | 0.96 |
| | 100 | 0.003 | 0.006 | 0.207 | 0.966 | 0.053 | 0.012 | 0.835 | 0.946 | 0.102 | 0.012 | 1.252 | 0.937 | 0.038 | 0.010 | 0.725 | 0.946 |
| | 150 | 0.002 | -0.002 | 0.167 | 0.946 | 0.035 | 0.024 | 0.705 | 0.931 | 0.072 | 0.004 | 1.009 | 0.926 | 0.024 | 0.011 | 0.593 | 0.963 |
| 0.25 | 50 | 0.008 | 0.018 | 0.304 | 0.908 | 0.092 | 0.063 | 1.202 | 0.944 | 0.281 | -0.033 | 1.986 | 0.952 | 0.0553 | 0.016 | 0.882 | 0.944 |
| | 100 | 0.003 | 0.004 | 0.206 | 0.944 | 0.053 | 0.023 | 0.833 | 0.924 | 0.0998 | 0.016 | 1.243 | 0.964 | 0.038 | 0.010 | 0.710 | 0.944 |
| | 150 | 0.002 | -0.002 | 0.167 | 0.932 | 0.033 | 0.006 | 0.703 | 0.948 | 0.061 | 0.003 | 1.013 | 0.952 | 0.024 | 0.008 | 0.604 | 0.944 |
| 0.5 | 50 | 0.008 | 0.015 | 0.300 | 0.912 | 0.121 | 0.047 | 1.381 | 0.948 | 0.293 | -0.031 | 2.006 | 0.956 | 0.058 | 0.023 | 0.897 | 0.936 |
| | 100 | 0.003 | 0.004 | 0.205 | 0.94 | 0.065 | 0.023 | 0.942 | 0.936 | 0.099 | 0.016 | 1.246 | 0.968 | 0.039 | 0.011 | 0.716 | 0.936 |
| | 150 | 0.002 | -0.002 | 0.166 | 0.928 | 0.042 | 0.016 | 0.789 | 0.944 | 0.062 | 0.005 | 1.015 | 0.952 | 0.024 | 0.007 | 0.609 | 0.948 |
| 0.75 | 50 | 0.006 | 0.016 | 0.273 | 0.928 | 0.437 | -0.026 | 2.367 | 0.932 | 0.269 | -0.017 | 2.0686 | 0.968 | 0.081 | 0.077 | 0.991 | 0.932 |
| | 100 | 0.002 | 0.007 | 0.192 | 0.96 | 0.163 | 0.005 | 1.614 | 0.976 | 0.120 | -0.037 | 1.285 | 0.952 | 0.047 | 0.050 | 0.794 | 0.952 |
| | 150 | 0.002 | -0.001 | 0.153 | 0.932 | 0.126 | 0.041 | 1.310 | 0.924 | 0.060 | -0.018 | 1.033 | 0.984 | 0.032 | 0.026 | 0.648 | 0.94 |
| 0.9 | 50 | 0.006 | 0.048 | 0.256 | 0.952 | 0.810 | -0.420 | 3.339 | 0.968 | 0.260 | 0.014 | 2.213 | 0.964 | 0.097 | 0.060 | 1.181 | 0.952 |
| | 100 | 0.004 | 0.041 | 0.187 | 0.852 | 0.586 | -0.403 | 2.327 | 0.876 | 0.156 | 0.002 | 1.396 | 0.94 | 0.082 | 0.040 | 0.958 | 0.94 |
| | 150 | 0.003 | 0.036 | 0.148 | 0.876 | 0.329 | -0.313 | 1.836 | 0.892 | 0.079 | -0.017 | 1.104 | 0.96 | 0.047 | -0.024 | 0.759 | 0.932 |
| \tau | n | \theta MSEs | \theta ABs | \theta AWs | \theta CPs | \delta_0 MSEs | \delta_0 ABs | \delta_0 AWs | \delta_0 CPs | \delta_1 MSEs | \delta_1 ABs | \delta_1 AWs | \delta_1 CPs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 50 | 0.007 | 0.011 | 0.296 | 0.916 | 0.338 | -0.025 | 2.504 | 0.968 | 0.480 | 0.078 | 2.831 | 0.968 |
| | 100 | 0.003 | 0.005 | 0.206 | 0.956 | 0.158 | 0.013 | 1.592 | 0.948 | 0.199 | 0.020 | 1.895 | 0.972 |
| | 150 | 0.002 | -0.002 | 0.167 | 0.948 | 0.103 | 0.0003 | 1.336 | 0.956 | 0.150 | -0.002 | 1.569 | 0.948 |
| 0.25 | 50 | 0.007 | 0.012 | 0.297 | 0.908 | 0.349 | -0.046 | 2.520 | 0.976 | 0.518 | 0.099 | 2.846 | 0.968 |
| | 100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.188 | -0.007 | 1.602 | 0.912 | 0.230 | 0.045 | 1.905 | 0.956 |
| | 150 | 0.002 | -0.001 | 0.167 | 0.920 | 0.118 | -0.020 | 1.344 | 0.972 | 0.163 | 0.030 | 1.574 | 0.948 |
| 0.5 | 50 | 0.007 | 0.016 | 0.299 | 0.929 | 0.425 | -0.030 | 2.806 | 0.954 | 0.440 | 0.039 | 2.889 | 0.974 |
| | 100 | 0.003 | 0.004 | 0.206 | 0.954 | 0.217 | -0.016 | 1.837 | 0.940 | 0.240 | 0.017 | 1.954 | 0.960 |
| | 150 | 0.002 | -0.002 | 0.167 | 0.951 | 0.153 | 0.010 | 1.519 | 0.943 | 0.170 | 0.027 | 1.596 | 0.949 |
| 0.75 | 50 | 0.007 | 0.012 | 0.298 | 0.936 | 2.089 | -0.254 | 5.308 | 0.944 | 0.974 | 0.242 | 3.753 | 0.940 |
| | 100 | 0.003 | 0.005 | 0.206 | 0.952 | 0.650 | -0.064 | 3.424 | 0.956 | 0.346 | 0.061 | 2.360 | 0.936 |
| | 150 | 0.002 | -0.003 | 0.166 | 0.936 | 0.561 | 0.012 | 2.785 | 0.928 | 0.237 | 0.031 | 1.899 | 0.936 |
| 0.9 | 50 | 0.007 | 0.021 | 0.309 | 0.928 | 4.413 | 0.488 | 3.186 | 0.898 | 1.563 | -0.764 | 1.728 | 0.902 |
| | 100 | 0.003 | 0.008 | 0.214 | 0.980 | 1.707 | 0.817 | 2.539 | 0.902 | 0.986 | -0.893 | 1.068 | 0.880 |
| | 150 | 0.002 | 0.007 | 0.173 | 0.960 | 1.518 | 0.873 | 1.606 | 0.926 | 0.989 | -0.925 | 0.630 | 0.928 |
| Model | \beta (SE) | \theta (SE) | K-S (p-value) | AIC | BIC | -Loglik | W^\star | A^\star |
|---|---|---|---|---|---|---|---|---|
| ULE | 3.2454 (0.142222) | 1.77959 (0.100898) | 0.0500323 (0.587827) | -383.997 | -377.044 | -193.998 | 0.0675539 | 0.48204 |
| Beta | 2.28593 (0.458761) | 8.66714 (0.332228) | 0.0650045 (0.264737) | -379.734 | -372.782 | -191.867 | 0.14045 | 0.87598 |
| Kumaraswamy | 2.43553 (0.458761) | 6.69423 (0.332228) | 0.0722763 (0.16457) | -377.528 | -370.575 | -190.764 | 0.191987 | 1.14739 |
| Unit-Omega | 3.6080 (0.3322) | 7.7337 (0.4588) | 0.05331 (0.5055) | -383.955 | -377.002 | -193.977 | 0.0820 | 0.5276 |
| Unit-BurrXII | 1.73211 (0.458761) | 10.076 (0.332228) | 0.0522282 (0.532105) | -383.005 | -376.052 | -193.503 | 0.0888576 | 0.582411 |
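The comparison criteria in the table above (AIC, BIC, and the one-sample Kolmogorov-Smirnov statistic) all follow from the maximized log-likelihood and the fitted CDF. A minimal sketch follows; the helper `fit_summary` and the generic `cdf` callable are illustrative stand-ins, not the paper's code, and `cdf` would be the fitted ULE (or competing) distribution function in practice.

```python
import numpy as np

def fit_summary(loglik, k, data, cdf):
    """Model-comparison measures from a fitted model.

    loglik: maximized log-likelihood; k: number of fitted parameters;
    data: the observed sample; cdf: the fitted CDF as a vectorized callable.
    Returns (AIC, BIC, K-S statistic).
    """
    n = len(data)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    # One-sample K-S statistic: largest gap between the empirical
    # and fitted CDFs, checked from both sides at each order statistic.
    u = np.sort(cdf(np.asarray(data, dtype=float)))
    i = np.arange(1, n + 1)
    ks = max(np.max(i / n - u), np.max(u - (i - 1) / n))
    return aic, bic, ks
```

Lower AIC/BIC and a smaller K-S statistic (with a larger p-value) indicate a better fit, which is the basis on which the ULE row is preferred in the table above.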
| Parameter | MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI |
|---|---|---|---|---|---|---|---|---|
| \theta | 1.77959 | (1.5818, 1.9774) | 1.8044 | (1.6114, 2.0062) | 1.7469 | (1.5527, 1.9412) | 1.8551 | (1.6090, 2.0701) |
| \beta | 3.2454 | (2.9667, 3.5241) | 3.2254 | (2.9628, 3.5129) | 3.2453 | (2.9631, 3.5275) | 3.1989 | (2.9563, 3.4995) |
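The asymptotic intervals (ACI) reported above are Wald-type intervals of the form estimate ± z_{α/2} · SE. A minimal sketch, with a helper name of our choosing:

```python
def wald_ci(estimate, se, level=0.95):
    """Asymptotic (Wald) confidence interval: estimate ± z * SE.

    Standard-normal quantiles are hard-coded for the usual levels
    to keep the sketch dependency-free.
    """
    z = {0.90: 1.644854, 0.95: 1.959964, 0.99: 2.575829}[level]
    return estimate - z * se, estimate + z * se
```

For example, with the ULE maximum likelihood fit reported earlier (\hat\theta = 1.77959, SE = 0.100898), this gives approximately (1.5818, 1.9774), which matches the ACI_{ML} entry for \theta above.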
| Parameters | Beta Estimate | Beta SE | Beta p-value | Kumaraswamy Estimate | Kumaraswamy SE | Kumaraswamy p-value | ULE Estimate | ULE SE | ULE p-value |
|---|---|---|---|---|---|---|---|---|---|
| \delta_0 | 0.9990 | 0.1291 | < 0.0000 | 1.1997 | 0.1397 | < 0.0000 | 1.04811 | 0.135555 | < 0.0000 |
| \delta_1 | 0.0659 | 0.0939 | 0.483 | 0.0418 | 0.0955 | 0.6619 | 0.100013 | 0.07255 | 0.169365 |
| \delta_2 | 0.2116 | 0.1038 | 0.0425 | 0.1833 | 0.1150 | 0.1123 | 0.21845 | 0.104438 | 0.037553 |
| \delta_3 | 0.0142 | 0.0054 | 0.0088 | 0.0107 | 0.0059 | 0.0692 | 0.01709 | 0.00547946 | 0.002043 |
| \alpha | 11.3447 | 1.0181 | < 0.0000 | 6.7274 | 0.4543 | < 0.0000 | 1.82216 | 0.10317 | < 0.0000 |
| AIC | -381.79 | | | -375.66 | | | -388.659 | | |
| BIC | -364.41 | | | -358.28 | | | -371.276 | | |
| Model | \beta (SE) | \theta (SE) | K-S (p-value) | AIC | BIC | -Loglik | Cramér-von Mises | Anderson-Darling |
|---|---|---|---|---|---|---|---|---|
| ULE | 0.257 (0.142) | 2.582 (0.101) | 0.068 (0.893) | -170.611 | -166.03 | -87.306 | 0.096 | 0.874 |
| Beta | 3.798 (0.459) | 0.613 (0.332) | 0.181 (0.017) | -148.235 | -143.654 | -76.118 | 0.697 | 3.961 |
| Kumaraswamy | 3.441 (0.459) | 0.665 (0.332) | 0.154 (0.064) | -153.308 | -148.727 | -78.654 | 0.502 | 3.097 |
| Unit-Omega | 5.377 (0.928) | 0.787 (0.073) | 0.128 (0.180) | -163.383 | -158.802 | -83.691 | 0.336 | 2.153 |
| Unit-BurrXII | 0.348 (0.0625) | 2.841 (0.421) | 0.338 (0.0) | -89.013 | -84.432 | -46.507 | 2.662 | 12.874 |
| Parameter | MLE | ACI_{ML} | MPSE | ACI_{MPS} | BSE | CrI | LSE | BCI |
|---|---|---|---|---|---|---|---|---|
| \theta | 2.5823 | (2.3845, 2.7800) | 2.4194 | (1.9312, 2.9077) | 2.5500 | (2.0647, 3.0615) | 3.0267 | (2.4038, 3.6891) |
| \beta | 0.2572 | (-0.0215, 0.5359) | 0.2579 | (0.2290, 0.2868) | 0.2581 | (0.2314, 0.2873) | 0.2536 | (0.2303, 0.279) |
| Parameters | Beta Estimate | Beta SE | Beta p-value | Kumaraswamy Estimate | Kumaraswamy SE | Kumaraswamy p-value | ULE Estimate | ULE SE | ULE p-value |
|---|---|---|---|---|---|---|---|---|---|
| \delta_0 | 1.888 | 1.172 | 0.112 | 2.539 | 1.550 | 0.106 | 3.823 | 1.463 | 0.011 |
| \delta_1 | -0.001 | 0.014 | 0.932 | -0.036 | 0.018 | 0.043 | -0.014 | 0.012 | 0.232 |
| \delta_2 | 0.178 | 0.232 | 0.445 | 0.596 | 0.390 | 0.131 | 0.142 | 0.250 | 0.573 |
| \delta_3 | -0.512 | 0.123 | < 0.000 | -0.798 | 0.161 | < 0.000 | -0.872 | 0.149 | < 0.000 |
| \delta_4 | 1.236 | 0.459 | 0.009 | 5.257 | 1.436 | 0.0005 | 1.799 | 0.444 | 0.0001 |
| \delta_5 | -0.012 | 0.088 | 0.890 | -0.028 | 0.120 | 0.817 | -0.070 | 0.101 | 0.493 |
| \delta_6 | -0.004 | 0.021 | 0.855 | -0.027 | 0.032 | 0.396 | 0.006 | 0.024 | 0.808 |
| \alpha | 6.331 | 1.123 | < 0.000 | 0.978 | 0.106 | < 0.000 | 3.503 | 0.357 | < 0.000 |
| AIC | -159.446 | | | -181.653 | | | -200.186 | | |
| BIC | -141.122 | | | -163.33 | | | -181.863 | | |