Research article

A regularity criterion of smooth solution for the 3D viscous Hall-MHD equations

  • Received: 20 August 2018 Accepted: 20 October 2018 Published: 28 November 2018
  • In this work, we investigate the regularity criterion for the solution of the Hall-MHD system in three dimensions. It is proved that if the pressure $\pi$ and the gradient of the magnetic field $B$ satisfy a certain space-time integrability condition on $[0,T]$, then the corresponding solution remains smooth up to time $T$. This result improves some previous works to the Morrey space $\mathcal{M}_{2,3/r}$ for $0\le r<1$, which is larger than $L^{3/r}$.

    Citation: A. M. Alghamdi, S. Gala, M. A. Ragusa. A regularity criterion of smooth solution for the 3D viscous Hall-MHD equations[J]. AIMS Mathematics, 2018, 3(4): 565-574. doi: 10.3934/Math.2018.4.565



    Type-Ⅰ and Type-Ⅱ censoring schemes are the two most popular censoring schemes used in practice. A mixture of the Type-Ⅰ and Type-Ⅱ censoring schemes, known as the hybrid censoring scheme, was first introduced by Epstein [3]. The hybrid censoring scheme has become quite popular in reliability and life-testing experiments; see, for example, Fairbanks et al. [4], Draper and Guttman [5], Chen and Bhattacharya [6], Jeong et al. [7], Childs et al. [8] and Gupta and Kundu [9]. Balakrishnan and Kundu [10] have extensively reviewed and discussed Type-Ⅰ and Type-Ⅱ hybrid censoring schemes and the associated inferential issues. They presented details on the developments of the generalized hybrid censoring and unified hybrid censoring schemes that have been introduced in the literature, along with several examples illustrating the described results. From now on, we refer to this hybrid censoring scheme as the Type-Ⅰ hybrid censoring scheme (Type-Ⅰ HCS). It is evident that the complete sample situation as well as the Type-Ⅰ and Type-Ⅱ right censoring schemes are all special cases of this Type-Ⅰ HCS.

    Recently, Tripathi and Lodhi [11] have discussed inferential procedures for the Weibull competing risks model with partially observed failure causes under generalized progressive hybrid censoring. Jeon and Kang [12] have estimated the half-logistic distribution based on multiply Type-Ⅱ hybrid censoring. Nassar and Dobbah [13] have analyzed the reliability characteristics of a bathtub-shaped distribution under adaptive Type-Ⅰ progressive hybrid censoring. Algarni, Almarashi and Abd-Elmougoud [14] have considered joint Type-Ⅰ generalized hybrid censoring for estimating two Weibull distributions.

    A three-parameter Dagum distribution was proposed by Dagum [15,16]; it plays an important role in modeling the size distribution of personal income. This distribution offers considerable flexibility for modeling lifetime data, such as in reliability. The Dagum distribution is not very popular, perhaps because of its difficult mathematical procedures. In the 1970s, Camilo Dagum embarked on a quest for a statistical distribution closely fitting empirical income and wealth distributions. Not satisfied with the classical distributions, he looked for a model accommodating the heavy tails present in empirical income and wealth distributions as well as permitting an interior mode. He ended up with the Dagum Type Ⅰ distribution, a three-parameter distribution, and two four-parameter generalizations; see Dagum [16,17,18]. The Dagum distribution is also called the inverse Burr, especially in the actuarial literature, as it is the reciprocal transformation of the Burr XII, although unlike the Burr XII it is not widely known across fields of science. Since Dagum proposed his model as an income distribution, its properties have been appreciated in economics and finance, and its features have been extensively discussed in studies of income and wealth. Kleiber and Kotz [19] and Kleiber [20] provided an exhaustive review of the origin of the Dagum model and its applications. Quintano and D'Agostino [21] adjusted the Dagum model for income distribution to account for individual characteristics, while Domma et al. [22,23] studied the Fisher information matrix under doubly censored data from the Dagum distribution and its reliability properties. An important characteristic of the Dagum distribution is that its hazard function can be monotonically decreasing, upside-down bathtub, or bathtub and then upside-down bathtub shaped; see Domma [24]. This behavior has led several authors to study the model in different fields. In fact, the Dagum distribution has been studied from a reliability point of view and used to analyze survival data; see Domma et al. [23].

    The Dagum distribution is specified by the probability density function (pdf)

$$f(x;\lambda,\beta,\theta)=\lambda\beta\theta x^{-(\beta+1)}\left(1+\lambda x^{-\beta}\right)^{-(\theta+1)},\quad x>0;\ \lambda,\beta,\theta>0, \qquad (1.1)$$

    and cumulative distribution function (cdf)

$$F(x;\lambda,\beta,\theta)=\left(1+\lambda x^{-\beta}\right)^{-\theta},\quad x>0;\ \lambda,\beta,\theta>0, \qquad (1.2)$$

    where $\lambda$ is the scale parameter and $\beta,\theta$ are the shape parameters.
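    The cdf (1.2) inverts in closed form, which makes simulating Dagum lifetimes straightforward. The following is a minimal plain-Python sketch of (1.1), (1.2) and inverse-transform sampling; function names and the parameter values used below are illustrative, not from the paper.

```python
import math
import random

def dagum_pdf(x, lam, beta, theta):
    # f(x) = λβθ x^{-(β+1)} (1 + λ x^{-β})^{-(θ+1)}, Eq. (1.1)
    return lam * beta * theta * x ** (-(beta + 1)) * (1 + lam * x ** (-beta)) ** (-(theta + 1))

def dagum_cdf(x, lam, beta, theta):
    # F(x) = (1 + λ x^{-β})^{-θ}, Eq. (1.2)
    return (1 + lam * x ** (-beta)) ** (-theta)

def dagum_quantile(u, lam, beta, theta):
    # Solving F(x) = u for x gives x = (λ / (u^{-1/θ} - 1))^{1/β}
    return (lam / (u ** (-1 / theta) - 1)) ** (1 / beta)

def dagum_sample(n, lam, beta, theta, rng=random):
    # Inverse-transform sampling: X = F^{-1}(U) with U ~ Uniform(0, 1)
    return [dagum_quantile(rng.random(), lam, beta, theta) for _ in range(n)]
```

    The same quantile function drives the simulation studies later in the paper, where censored samples from (1.1) are needed.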

    Huang and Yang [1] have considered a combined hybrid censoring sampling (CHCS) scheme, defined as follows. For fixed $m,r\in\{1,2,\dots,n\}$ and $(T_1,T_2)\in(0,\infty)$ such that $m<r$ and $T_1<T_2$, let $T$ denote the terminating time of the experiment. If the $m$th failure occurs before time $T_1$, the experiment terminates at $\min\{X_{r:n},T_1\}$; if the $m$th failure occurs between $T_1$ and $T_2$, the experiment terminates at $X_{m:n}$; and finally, if the $m$th failure occurs after time $T_2$, the experiment terminates at $T_2$. For later convenience, we abbreviate this scheme as CHCS$(m,r;T_1,T_2)$. In fact, this scheme comprises the following six cases, and in each case some part of the data is unobservable:

$$T=\begin{cases}X_{m:n}, & 0<T_1<X_{m:n}<(T_2<X_{r:n}),\\ X_{m:n}, & 0<T_1<X_{m:n}<(X_{r:n}<T_2),\\ T_2, & 0<T_1<T_2<(X_{m:n}<X_{r:n}),\\ X_{r:n}, & 0<X_{m:n}<X_{r:n}<(T_1<T_2),\\ T_1, & 0<X_{m:n}<T_1<(X_{r:n}<T_2),\\ T_1, & 0<X_{m:n}<T_1<(T_2<X_{r:n}),\end{cases} \qquad (1.3)$$

    where the data in parentheses are unobservable.

    Balakrishnan et al. [2] have proposed a unified hybrid censoring sampling (UHCS) scheme as follows. For fixed $m,r\in\{1,2,\dots,n\}$ and $(T_1,T_2)\in(0,\infty)$ with $m<r$ and $T_1<T_2$, let $T$ denote the terminating time of the experiment. If the $m$th failure occurs before time $T_1$, the experiment terminates at $\min\{\max\{X_{r:n},T_1\},T_2\}$; if the $m$th failure occurs between $T_1$ and $T_2$, the experiment terminates at $\min\{X_{r:n},T_2\}$; and finally, if the $m$th failure occurs after time $T_2$, the experiment terminates at $X_{m:n}$. Again, for later convenience, we abbreviate this scheme as UHCS$(m,r;T_1,T_2)$. Similarly, this scheme comprises six cases, and in each case some part of the data is unobservable:

$$T=\begin{cases}T_2, & 0<T_1<X_{m:n}<T_2<(X_{r:n}),\\ X_{r:n}, & 0<T_1<X_{m:n}<X_{r:n}<(T_2),\\ X_{m:n}, & 0<T_1<T_2<X_{m:n}<(X_{r:n}),\\ T_1, & 0<X_{m:n}<X_{r:n}<T_1<(T_2),\\ X_{r:n}, & 0<X_{m:n}<T_1<X_{r:n}<(T_2),\\ T_2, & 0<X_{m:n}<T_1<T_2<(X_{r:n}),\end{cases} \qquad (1.4)$$

    where the data in parentheses are unobservable.
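    The case analyses in (1.3) and (1.4) can be restated operationally. The sketch below (helper names are hypothetical; the fully observed sample is used only to make the rule explicit, since in practice later failures are censored) returns the terminating time under each scheme:

```python
def chcs_termination(x, m, r, t1, t2):
    """Terminating time under CHCS(m, r; T1, T2), per Eq. (1.3).

    x: failure times X_{1:n}, ..., X_{n:n} in ascending order.
    """
    xm, xr = x[m - 1], x[r - 1]
    if xm < t1:                  # m-th failure before T1
        return min(xr, t1)
    if xm < t2:                  # m-th failure between T1 and T2
        return xm
    return t2                    # m-th failure after T2

def uhcs_termination(x, m, r, t1, t2):
    """Terminating time under UHCS(m, r; T1, T2), per Eq. (1.4)."""
    xm, xr = x[m - 1], x[r - 1]
    if xm < t1:
        return min(max(xr, t1), t2)
    if xm < t2:
        return min(xr, t2)
    return xm
```

    Evaluating these rules on the orderings listed in (1.3) and (1.4) reproduces the six cases of each scheme.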

    In this paper, we merge CHCS$(m,r;T_1,T_2)$ and UHCS$(m,r;T_1,T_2)$ in a unified approach, referred to as the combined-unified hybrid censored scheme (C-UHCS$(m,r;T_1,T_2)$). To the best of our knowledge, no attempt has been made to estimate the parameters of the Dagum distribution using CHCS$(m,r;T_1,T_2)$ or UHCS$(m,r;T_1,T_2)$; therefore, we apply C-UHCS$(m,r;T_1,T_2)$ to the Dagum distribution. We first obtain the maximum likelihood estimates of the parameters and use them to construct asymptotic and bootstrap confidence intervals (CIs). Next, we obtain the Bayes estimates of $\lambda,\beta$ and $\theta$. The layout of this paper is as follows. In Section 2, we describe the construction of the likelihood function based on C-UHCS$(m,r;T_1,T_2)$ and obtain the MLEs of $\lambda,\beta$ and $\theta$; the asymptotic and bootstrap confidence intervals based on the observed Fisher information matrix are also discussed there. Next, in Section 3, we consider Bayesian estimation of the unknown parameters under the squared error and LINEX loss functions. Simulation studies are carried out in Section 4 to assess the performance of the proposed methods. Section 5 contains a brief conclusion.

    Let $X_1,X_2,\dots,X_n$ denote the lifetimes of experimental units placed on a life-test; we shall assume that these variables are iid from an absolutely continuous population with cumulative distribution function (cdf) $F(x)$ and probability density function (pdf) $f(x)$. In this section, we construct the likelihood function under the censoring scheme C-UHCS$(m,r;T_1,T_2)$. Let $D_j$ denote the number of failures observed up to $T_j$, $j=1,2$; obviously $D_1\le D_2$. Then the likelihood function under CHCS$(m,r;T_1,T_2)$, for a parameter space $\Omega$, is given as

$$L^{(C)}(\Omega|x)=\begin{cases}\dfrac{n!}{(n-m)!}\left[1-F(x_m)\right]^{n-m}\prod_{i=1}^{m}f(x_i); & D_1=0,\dots,m-1,\ D_2=m,\\[6pt] \dfrac{n!}{(n-D_2)!}\left[1-F(T_2)\right]^{n-D_2}\prod_{i=1}^{D_2}f(x_i); & D_1,D_2=0,\dots,m-1,\\[6pt] \dfrac{n!}{(n-r)!}\left[1-F(x_r)\right]^{n-r}\prod_{i=1}^{r}f(x_i); & D_1=D_2=r,\\[6pt] \dfrac{n!}{(n-D_1)!}\left[1-F(T_1)\right]^{n-D_1}\prod_{i=1}^{D_1}f(x_i); & D_1=D_2=m,\dots,r-1.\end{cases} \qquad (2.1)$$

    Similarly, the observed likelihood function based on UHCS(m,r;T1,T2) is given as

$$L^{(U)}(\Omega|x)=\begin{cases}\dfrac{n!}{(n-D)!}\left[1-F(T_1)\right]^{n-D}\prod_{i=1}^{D}f(x_i); & D_1=D_2=D=r,\dots,n,\\[6pt] \dfrac{n!}{(n-r)!}\left[1-F(x_r)\right]^{n-r}\prod_{i=1}^{r}f(x_i); & D_1=m,\dots,r-1,\ D_2=r,\\[6pt] \dfrac{n!}{(n-D_2)!}\left[1-F(T_2)\right]^{n-D_2}\prod_{i=1}^{D_2}f(x_i); & D_1,D_2=m,\dots,r-1,\\[6pt] \dfrac{n!}{(n-r)!}\left[1-F(x_r)\right]^{n-r}\prod_{i=1}^{r}f(x_i); & D_1=0,\dots,m-1,\ D_2=r,\\[6pt] \dfrac{n!}{(n-D_2)!}\left[1-F(T_2)\right]^{n-D_2}\prod_{i=1}^{D_2}f(x_i); & D_1=0,\dots,m-1,\ D_2=m,\dots,r-1,\\[6pt] \dfrac{n!}{(n-m)!}\left[1-F(x_m)\right]^{n-m}\prod_{i=1}^{m}f(x_i); & D_1,D_2=0,\dots,m-1.\end{cases} \qquad (2.2)$$

    Assume that, in any case, we terminate the experiment at $T$, which may refer to time $T_1$, time $T_2$, observation $x_m$ or observation $x_r$, and let $k$ denote the number of failures observed up to $T$, equal, respectively, to $D_1$, $D_2$, $m$ or $r$. The likelihood function of C-UHCS$(k,r;T_1,T_2)$, which represents all the previous likelihood functions $L^{(C)}(\Omega|x)$ and $L^{(U)}(\Omega|x)$ under the different values of $k$, $T$ and $x_k=(x_1,x_2,\dots,x_k)$, can be written as

$$L(\Omega|x_k)=\frac{n!}{(n-k)!}\left[1-F(T)\right]^{n-k}\prod_{i=1}^{k}f(x_i), \qquad (2.3)$$

    where k and T can be chosen as:

    Cases                                  L(C)(Ω|x)         L(U)(Ω|x)
                                           k      T          k      T
    1 :  0 < T1 < Xk:n < T2 < Xr:n         m      Xm:n       D2     T2
    2 :  0 < T1 < Xk:n < Xr:n < T2         m      Xm:n       r      Xr:n
    3 :  0 < T1 < T2 < Xk:n < Xr:n         D2     T2         m      Xm:n
    4 :  0 < Xk:n < Xr:n < T1 < T2         r      Xr:n       D1     T1
    5 :  0 < Xk:n < T1 < Xr:n < T2         D1     T1         r      Xr:n
    6 :  0 < Xk:n < T1 < T2 < Xr:n         D1     T1         D2     T2


    Suppose that $\{x_1,x_2,\dots,x_k\}$ is a sequence of observed data from the Dagum distribution. Substituting (1.1) and (1.2) into (2.3), the observed likelihood function of $\lambda,\beta$ and $\theta$ based on C-UHCS$(k,r;T_1,T_2)$ becomes

$$L(\lambda,\beta,\theta|x_k)=\frac{n!}{(n-k)!}\lambda^k\beta^k\theta^k\left[1-(1+\lambda T^{-\beta})^{-\theta}\right]^{n-k}\prod_{i=1}^{k}x_i^{-(\beta+1)}\left(1+\lambda x_i^{-\beta}\right)^{-(\theta+1)}, \qquad (3.1)$$

    and the corresponding log-likelihood function (Ł) is

$$Ł=\log L(\lambda,\beta,\theta|x_k)=\log\frac{n!}{(n-k)!}+k\log\lambda+k\log\beta+k\log\theta+(n-k)\log\left[1-(1+\lambda T^{-\beta})^{-\theta}\right]-\sum_{i=1}^{k}\left[(\beta+1)\log x_i+(\theta+1)\log(1+\lambda x_i^{-\beta})\right]. \qquad (3.2)$$

    Taking the first partial derivatives of the log-likelihood (3.2) with respect to $\lambda,\beta,\theta$, equating each to zero, and writing $M_a=(1+\lambda T^{-\beta})^{-(\theta+a)}$, we obtain

$$\frac{\partial Ł}{\partial\lambda}=\frac{k}{\lambda}+\frac{(n-k)\theta T^{-\beta}M_1}{1-M_0}-\sum_{i=1}^{k}\frac{(\theta+1)x_i^{-\beta}}{1+\lambda x_i^{-\beta}}=0, \qquad (3.3)$$
$$\frac{\partial Ł}{\partial\beta}=\frac{k}{\beta}-\frac{(n-k)\theta\lambda T^{-\beta}M_1\log T}{1-M_0}-\sum_{i=1}^{k}\left[\log x_i-\frac{(\theta+1)\lambda x_i^{-\beta}\log x_i}{1+\lambda x_i^{-\beta}}\right]=0, \qquad (3.4)$$
$$\frac{\partial Ł}{\partial\theta}=\frac{k}{\theta}+\frac{(n-k)M_0\log(1+\lambda T^{-\beta})}{1-M_0}-\sum_{i=1}^{k}\log(1+\lambda x_i^{-\beta})=0. \qquad (3.5)$$

    The solutions of the above nonlinear equations are the maximum likelihood estimators of the Dagum distribution parameters λ,β and θ. As the equations expressed in (3.3), (3.4) and (3.5) cannot be solved analytically, one must use a numerical procedure to solve them.
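    The score equations (3.3)-(3.5) are easy to get wrong when implementing a numerical solver, so it is worth checking the closed forms against finite differences of (3.2). Below is a minimal plain-Python sketch of that check (the constant $\log\frac{n!}{(n-k)!}$ is dropped, since it does not affect the maximizer; the sample values in the test are illustrative only):

```python
import math

def loglik(lam, beta, theta, x, n, T):
    # Ł of Eq. (3.2) without the constant log(n!/(n-k)!) term
    k = len(x)
    M0 = (1 + lam * T ** (-beta)) ** (-theta)
    s = sum((beta + 1) * math.log(xi) + (theta + 1) * math.log(1 + lam * xi ** (-beta)) for xi in x)
    return k * (math.log(lam) + math.log(beta) + math.log(theta)) + (n - k) * math.log(1 - M0) - s

def score(lam, beta, theta, x, n, T):
    # Closed-form partial derivatives (3.3)-(3.5), with M_a = (1+λT^{-β})^{-(θ+a)}
    k = len(x)
    Tb = T ** (-beta)
    M0 = (1 + lam * Tb) ** (-theta)
    M1 = (1 + lam * Tb) ** (-(theta + 1))
    dlam = k / lam + (n - k) * theta * Tb * M1 / (1 - M0) \
        - sum((theta + 1) * xi ** (-beta) / (1 + lam * xi ** (-beta)) for xi in x)
    dbeta = k / beta - (n - k) * theta * lam * Tb * M1 * math.log(T) / (1 - M0) \
        - sum(math.log(xi) - (theta + 1) * lam * xi ** (-beta) * math.log(xi) / (1 + lam * xi ** (-beta)) for xi in x)
    dtheta = k / theta + (n - k) * M0 * math.log(1 + lam * Tb) / (1 - M0) \
        - sum(math.log(1 + lam * xi ** (-beta)) for xi in x)
    return dlam, dbeta, dtheta
```

    Any root-finder (e.g., Newton's method on `score`) can then be used to locate the MLEs.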

    Then, we can use the asymptotic normality of the MLEs to compute asymptotic confidence intervals for the parameters $\lambda,\beta$ and $\theta$. The observed variance-covariance matrix of the MLEs, $\hat V=[\sigma_{i,j}]$, $i,j=1,2,3$, is taken as

$$\hat V=\begin{bmatrix}-\frac{\partial^2Ł}{\partial\lambda^2}&-\frac{\partial^2Ł}{\partial\lambda\partial\beta}&-\frac{\partial^2Ł}{\partial\lambda\partial\theta}\\ -\frac{\partial^2Ł}{\partial\beta\partial\lambda}&-\frac{\partial^2Ł}{\partial\beta^2}&-\frac{\partial^2Ł}{\partial\beta\partial\theta}\\ -\frac{\partial^2Ł}{\partial\theta\partial\lambda}&-\frac{\partial^2Ł}{\partial\theta\partial\beta}&-\frac{\partial^2Ł}{\partial\theta^2}\end{bmatrix}^{-1}_{(\lambda=\hat\lambda_{ML},\,\beta=\hat\beta_{ML},\,\theta=\hat\theta_{ML})}, \qquad (3.6)$$

    where

$$\frac{\partial^2Ł}{\partial\lambda^2}=-\frac{k}{\lambda^2}-\frac{(n-k)\left[\theta^2M_1^2+\theta(\theta+1)(1-M_0)M_2\right]T^{-2\beta}}{(1-M_0)^2}+\sum_{i=1}^{k}\frac{(\theta+1)x_i^{-2\beta}}{(1+\lambda x_i^{-\beta})^2}, \qquad (3.7)$$
$$\frac{\partial^2Ł}{\partial\beta\partial\lambda}=-\frac{(n-k)\theta\log T\,T^{-\beta}}{(1-M_0)^2}\left\{\left[M_1-(\theta+1)\lambda M_2T^{-\beta}\right](1-M_0)-\theta\lambda M_1^2T^{-\beta}\right\}+\sum_{i=1}^{k}\frac{(\theta+1)x_i^{-\beta}\log x_i}{(1+\lambda x_i^{-\beta})^2}, \qquad (3.8)$$
$$\frac{\partial^2Ł}{\partial\theta\partial\lambda}=\frac{(n-k)M_1\left(1-M_0-\theta\log(1+\lambda T^{-\beta})\right)T^{-\beta}}{(1-M_0)^2}-\sum_{i=1}^{k}\frac{x_i^{-\beta}}{1+\lambda x_i^{-\beta}}, \qquad (3.9)$$
$$\frac{\partial^2Ł}{\partial\beta^2}=-\frac{k}{\beta^2}-\sum_{i=1}^{k}\frac{(\theta+1)\lambda x_i^{-\beta}\log^2x_i}{(1+\lambda x_i^{-\beta})^2}+\frac{(n-k)\log^2T\,T^{-\beta}}{(1-M_0)^2}\left\{\theta\lambda\left(M_1-(\theta+1)M_2\lambda T^{-\beta}\right)(1-M_0)-T^{-\beta}(\theta\lambda M_1)^2\right\}, \qquad (3.10)$$
$$\frac{\partial^2Ł}{\partial\theta\partial\beta}=\frac{(n-k)\lambda T^{-\beta}M_1\log T\left(\theta\log(1+\lambda T^{-\beta})+M_0-1\right)}{(1-M_0)^2}+\sum_{i=1}^{k}\frac{\lambda x_i^{-\beta}\log x_i}{1+\lambda x_i^{-\beta}}, \qquad (3.11)$$
$$\frac{\partial^2Ł}{\partial\theta^2}=-\frac{k}{\theta^2}-\frac{(n-k)M_0\log^2(1+\lambda T^{-\beta})}{(1-M_0)^2}. \qquad (3.12)$$

    The $100(1-\alpha)\%$ two-sided approximate confidence intervals for the parameters $\lambda$, $\beta$ and $\theta$ are then given by

$$\hat\lambda\pm z_{\alpha/2}\sqrt{V(\hat\lambda)}, \qquad (3.13)$$
$$\hat\beta\pm z_{\alpha/2}\sqrt{V(\hat\beta)}, \qquad (3.14)$$

    and

$$\hat\theta\pm z_{\alpha/2}\sqrt{V(\hat\theta)}, \qquad (3.15)$$

    respectively, where $V(\hat\lambda)$, $V(\hat\beta)$ and $V(\hat\theta)$ are the estimated variances of $\hat\lambda_{ML}$, $\hat\beta_{ML}$ and $\hat\theta_{ML}$, given by the diagonal elements of $\hat V$, and $z_{\alpha/2}$ is the upper $(\alpha/2)$th percentile of the standard normal distribution.
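    Given an estimate and its variance from the diagonal of $\hat V$, the intervals (3.13)-(3.15) are one line of code; a sketch using only the Python standard library (the function name is hypothetical):

```python
from statistics import NormalDist

def wald_ci(estimate, variance, alpha=0.05):
    # 100(1-α)% interval: estimate ± z_{α/2} * sqrt(variance), Eqs. (3.13)-(3.15)
    z = NormalDist().inv_cdf(1 - alpha / 2)   # upper α/2 percentile of N(0, 1)
    half = z * variance ** 0.5
    return estimate - half, estimate + half
```

    For example, an estimate of 5.0 with variance 0.04 and α = 0.05 gives an interval of roughly (4.608, 5.392).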

    In order to construct bootstrap (Boot-p) confidence intervals for the unknown parameters $\phi=(\lambda,\beta,\theta)$ based on the C-UHCS scheme, we apply the following algorithm (for more details, one may refer to Kundu and Joarder [25] and Dube, Garg and Krishna [26]).

    Boot-p interval algorithm:

    step-1: Simulate x1:n,x2:n,...,xk:n from Dagum distribution given in (1.1) and derive an estimate ˆϕ of ϕ.

    step-2: Simulate another sample $x^*_{1:n},x^*_{2:n},\dots,x^*_{k:n}$ using $\hat\phi$, $k$ and $T$. Then derive the updated bootstrap estimate $\hat\phi^*$ of $\phi$.

    step-3: Repeat the previous step for a prescribed number $B$ of replications.

    step-4: With $\hat F(x)=P(\hat\phi^*\le x)$ denoting the distribution function of $\hat\phi^*$, the $100(1-\alpha)\%$ confidence interval of $\phi$ is given by

$$\left(\hat\phi_{Boot\text{-}p}\left(\tfrac{\alpha}{2}\right),\ \hat\phi_{Boot\text{-}p}\left(1-\tfrac{\alpha}{2}\right)\right),$$

    where $\hat\phi_{Boot\text{-}p}(x)=\hat F^{-1}(x)$ and $x$ is prefixed.
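    The four steps above can be sketched generically. Here `estimate_fn` and `resample_fn` are hypothetical stand-ins: in the present setting `resample_fn` would simulate a censored Dagum sample using $\hat\phi$, $k$ and $T$, and `estimate_fn` would be the MLE of Section 2. The toy test resamples a Gaussian mean purely to exercise the percentile logic.

```python
import random
import statistics

def boot_p_interval(estimate_fn, resample_fn, b=1000, alpha=0.05, rng=None):
    """Percentile bootstrap: resample b times, re-estimate, and take the
    empirical α/2 and 1-α/2 quantiles of the bootstrap estimates."""
    rng = rng or random.Random(0)
    boots = sorted(estimate_fn(resample_fn(rng)) for _ in range(b))
    lo = boots[int(b * alpha / 2)]
    hi = boots[min(b - 1, int(b * (1 - alpha / 2)))]
    return lo, hi
```

    The returned pair is the Boot-p interval of step-4.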

    Bayesian inference is a convenient method to use with C-UHCS$(k,r;T_1,T_2)$. Indeed, since data under C-UHCS$(k,r;T_1,T_2)$ are so scarce, prior information is welcome. Loss functions are chosen depending on how one measures the distance between the estimate and the unknown parameter. In order to conduct the Bayesian analysis, a quadratic loss function is usually considered. A very popular quadratic loss is the squared error (SE) loss function given by

$$L_S(g(\varphi),\hat g(\varphi))=\left(\hat g(\varphi)-g(\varphi)\right)^2, \qquad (4.1)$$

    where $\hat g(\varphi)$ is an estimate of the parametric function $g(\varphi)$. The Bayes estimate of $g(\varphi)$, say $\hat g_S(\varphi)$, under the SE loss function is the posterior mean given by

$$\hat g_S(\varphi)=E_\varphi\left[g(\varphi)|x_k\right]. \qquad (4.2)$$

    Using the SE loss function in the Bayesian approach leads to equal penalization of underestimation and overestimation, which is inappropriate for practical purposes. For instance, in estimating reliability characteristics, overestimation is more serious than underestimation. Therefore, different asymmetric loss functions have been considered by researchers, such as the LINEX loss function, given by

$$L_L(g(\varphi),\hat g(\varphi))=\exp\left[\rho\left(\hat g(\varphi)-g(\varphi)\right)\right]-\rho\left(\hat g(\varphi)-g(\varphi)\right)-1,\quad \rho\neq0, \qquad (4.3)$$

    which is a popular asymmetric loss function that penalizes underestimation and overestimation for negative and positive values of $\rho$, respectively. For $\rho$ close to zero, the LINEX loss is approximately equal to the SE loss and therefore almost symmetric. The Bayes estimate of $g(\varphi)$ under the LINEX loss function becomes

$$\hat g_L(\varphi)=-\frac{1}{\rho}\log\left(E_\varphi\left[\exp(-\rho g(\varphi))|x_k\right]\right). \qquad (4.4)$$

    Here, we derive the different Bayes estimates using the loss functions mentioned above. Under the assumption that the parameters $\lambda$, $\beta$ and $\theta$ are unknown and independent, we adopt the joint prior density function suggested by Al-Hussaini et al. [27], which gave good results:

$$\pi(\lambda,\beta,\theta)=\nu_1\nu_2\nu_3\exp\left[-(\nu_1\lambda+\nu_2\beta+\nu_3\theta)\right],\quad \lambda,\beta,\theta>0, \qquad (4.5)$$

    where ν1,ν2 and ν3 are positive constants.

    In order to use Tierney-Kadane's approximation technique, we set

$$\phi(\lambda,\beta,\theta)=\frac{1}{n}\log\left[L(\lambda,\beta,\theta|x_k)\,\pi(\lambda,\beta,\theta)\right] \quad\text{and}\quad \phi^{(g)}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log g(\lambda,\beta,\theta). \qquad (4.6)$$

    Now, assuming the squared error loss function, the Bayes estimate of a function of the parameters, $g(\lambda,\beta,\theta)$, can be written in terms of (4.6) as

$$\hat g_{ST}(\lambda,\beta,\theta)=\frac{\int_0^\infty\!\int_0^\infty\!\int_0^\infty g(\lambda,\beta,\theta)L(\lambda,\beta,\theta|x_k)\pi(\lambda,\beta,\theta)\,d\lambda\,d\beta\,d\theta}{\int_0^\infty\!\int_0^\infty\!\int_0^\infty L(\lambda,\beta,\theta|x_k)\pi(\lambda,\beta,\theta)\,d\lambda\,d\beta\,d\theta}=\frac{\int_0^\infty\!\int_0^\infty\!\int_0^\infty \exp\left(n\phi^{(g)}(\lambda,\beta,\theta)\right)d\lambda\,d\beta\,d\theta}{\int_0^\infty\!\int_0^\infty\!\int_0^\infty \exp\left(n\phi(\lambda,\beta,\theta)\right)d\lambda\,d\beta\,d\theta}. \qquad (4.7)$$

    By using Tierney and Kadane [28], the approximate form of (4.7) becomes

$$\hat g_{ST}(\lambda,\beta,\theta)=\left[\frac{\det H^{(g)}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(g)}(\bar\lambda^{(g)},\bar\beta^{(g)},\bar\theta^{(g)})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right), \qquad (4.8)$$

    where $(\bar\lambda^{(g)},\bar\beta^{(g)},\bar\theta^{(g)})$ and $(\bar\lambda,\bar\beta,\bar\theta)$ maximize $\phi^{(g)}(\lambda,\beta,\theta)$ and $\phi(\lambda,\beta,\theta)$, respectively, and $H^{(g)}$ and $H$ are minus the inverse Hessian matrices of $\phi^{(g)}(\lambda,\beta,\theta)$ and $\phi(\lambda,\beta,\theta)$ evaluated at $(\bar\lambda^{(g)},\bar\beta^{(g)},\bar\theta^{(g)})$ and $(\bar\lambda,\bar\beta,\bar\theta)$, respectively. Here, from (3.2), (4.5) and (4.6), we have

$$\phi(\lambda,\beta,\theta)=\frac{1}{n}\left\{\log(\nu_1\nu_2\nu_3)-\nu_1\lambda-\nu_2\beta-\nu_3\theta+Ł\right\}. \qquad (4.9)$$

    Now, (¯λ,¯β,¯θ) can be calculated from the simultaneous solution of the nonlinear equations

$$\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)=\nu_1,\quad \frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)=\nu_2 \quad\text{and}\quad \frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)=\nu_3. \qquad (4.10)$$

    The second-order derivatives of $Ł$ given in (3.7)-(3.12) can be used to determine the determinant of minus the inverse Hessian matrix of $\phi(\lambda,\beta,\theta)$ at $(\bar\lambda,\bar\beta,\bar\theta)$ as

$$\det H=\frac{1}{n^3}\det\begin{bmatrix}\frac{\partial^2Ł}{\partial\lambda^2}&\frac{\partial^2Ł}{\partial\lambda\partial\beta}&\frac{\partial^2Ł}{\partial\lambda\partial\theta}\\ \frac{\partial^2Ł}{\partial\beta\partial\lambda}&\frac{\partial^2Ł}{\partial\beta^2}&\frac{\partial^2Ł}{\partial\beta\partial\theta}\\ \frac{\partial^2Ł}{\partial\theta\partial\lambda}&\frac{\partial^2Ł}{\partial\theta\partial\beta}&\frac{\partial^2Ł}{\partial\theta^2}\end{bmatrix}^{-1}_{(\lambda=\bar\lambda,\,\beta=\bar\beta,\,\theta=\bar\theta)}. \qquad (4.11)$$

    Then, the Bayes estimates of $\lambda,\beta$ and $\theta$ based on the squared error loss function can be obtained by replacing $g(\lambda,\beta,\theta)$ by $\lambda,\beta$ and $\theta$, respectively, and the corresponding $\phi^{(g)}_{ST}(\lambda,\beta,\theta)$ take the forms:

$$\phi^{(g)}_{ST}(\lambda,\beta,\theta)=\begin{cases}\phi^{(\lambda)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\lambda, & g(\lambda,\beta,\theta)=\lambda,\\[4pt] \phi^{(\beta)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\beta, & g(\lambda,\beta,\theta)=\beta,\\[4pt] \phi^{(\theta)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\theta, & g(\lambda,\beta,\theta)=\theta.\end{cases} \qquad (4.12)$$

    Hence, $(\bar\lambda^{(\lambda)}_{ST},\bar\beta^{(\lambda)}_{ST},\bar\theta^{(\lambda)}_{ST})$, $(\bar\lambda^{(\beta)}_{ST},\bar\beta^{(\beta)}_{ST},\bar\theta^{(\beta)}_{ST})$ and $(\bar\lambda^{(\theta)}_{ST},\bar\beta^{(\theta)}_{ST},\bar\theta^{(\theta)}_{ST})$ can be computed by maximizing $\phi^{(\lambda)}_{ST}(\lambda,\beta,\theta)$, $\phi^{(\beta)}_{ST}(\lambda,\beta,\theta)$ and $\phi^{(\theta)}_{ST}(\lambda,\beta,\theta)$, respectively, through the simultaneous solution of each of the following systems:

    System 1: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1+\frac{1}{n\lambda}=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3=0$,

    System 2: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2+\frac{1}{n\beta}=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3=0$,

    System 3: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3+\frac{1}{n\theta}=0$.

    Again, the second-order derivatives of $\phi^{(\lambda)}_{ST}(\lambda,\beta,\theta)$, $\phi^{(\beta)}_{ST}(\lambda,\beta,\theta)$ and $\phi^{(\theta)}_{ST}(\lambda,\beta,\theta)$ evaluated at $(\bar\lambda^{(\lambda)}_{ST},\bar\beta^{(\lambda)}_{ST},\bar\theta^{(\lambda)}_{ST})$, $(\bar\lambda^{(\beta)}_{ST},\bar\beta^{(\beta)}_{ST},\bar\theta^{(\beta)}_{ST})$ and $(\bar\lambda^{(\theta)}_{ST},\bar\beta^{(\theta)}_{ST},\bar\theta^{(\theta)}_{ST})$ can be used to calculate the elements of $H^{(\lambda)}_{ST}$, $H^{(\beta)}_{ST}$ and $H^{(\theta)}_{ST}$, respectively, as:

$$\det H^{(\lambda)}_{ST}=\frac{1}{n^3}\det\begin{bmatrix}\frac{\partial^2Ł}{\partial\lambda^2}-\frac{1}{\lambda^2}&\frac{\partial^2Ł}{\partial\lambda\partial\beta}&\frac{\partial^2Ł}{\partial\lambda\partial\theta}\\ \frac{\partial^2Ł}{\partial\beta\partial\lambda}&\frac{\partial^2Ł}{\partial\beta^2}&\frac{\partial^2Ł}{\partial\beta\partial\theta}\\ \frac{\partial^2Ł}{\partial\theta\partial\lambda}&\frac{\partial^2Ł}{\partial\theta\partial\beta}&\frac{\partial^2Ł}{\partial\theta^2}\end{bmatrix}^{-1}_{(\lambda=\bar\lambda^{(\lambda)}_{ST},\,\beta=\bar\beta^{(\lambda)}_{ST},\,\theta=\bar\theta^{(\lambda)}_{ST})}, \qquad (4.13)$$
$$\det H^{(\beta)}_{ST}=\frac{1}{n^3}\det\begin{bmatrix}\frac{\partial^2Ł}{\partial\lambda^2}&\frac{\partial^2Ł}{\partial\lambda\partial\beta}&\frac{\partial^2Ł}{\partial\lambda\partial\theta}\\ \frac{\partial^2Ł}{\partial\beta\partial\lambda}&\frac{\partial^2Ł}{\partial\beta^2}-\frac{1}{\beta^2}&\frac{\partial^2Ł}{\partial\beta\partial\theta}\\ \frac{\partial^2Ł}{\partial\theta\partial\lambda}&\frac{\partial^2Ł}{\partial\theta\partial\beta}&\frac{\partial^2Ł}{\partial\theta^2}\end{bmatrix}^{-1}_{(\lambda=\bar\lambda^{(\beta)}_{ST},\,\beta=\bar\beta^{(\beta)}_{ST},\,\theta=\bar\theta^{(\beta)}_{ST})}, \qquad (4.14)$$

    and

$$\det H^{(\theta)}_{ST}=\frac{1}{n^3}\det\begin{bmatrix}\frac{\partial^2Ł}{\partial\lambda^2}&\frac{\partial^2Ł}{\partial\lambda\partial\beta}&\frac{\partial^2Ł}{\partial\lambda\partial\theta}\\ \frac{\partial^2Ł}{\partial\beta\partial\lambda}&\frac{\partial^2Ł}{\partial\beta^2}&\frac{\partial^2Ł}{\partial\beta\partial\theta}\\ \frac{\partial^2Ł}{\partial\theta\partial\lambda}&\frac{\partial^2Ł}{\partial\theta\partial\beta}&\frac{\partial^2Ł}{\partial\theta^2}-\frac{1}{\theta^2}\end{bmatrix}^{-1}_{(\lambda=\bar\lambda^{(\theta)}_{ST},\,\beta=\bar\beta^{(\theta)}_{ST},\,\theta=\bar\theta^{(\theta)}_{ST})}. \qquad (4.15)$$

    Therefore, the approximate Bayes estimates of $\lambda$, $\beta$ and $\theta$ based on the squared error loss function are:

$$\left.\begin{aligned}\hat\lambda_{ST}&=\left[\frac{\det H^{(\lambda)}_{ST}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\lambda)}_{ST}(\bar\lambda^{(\lambda)}_{ST},\bar\beta^{(\lambda)}_{ST},\bar\theta^{(\lambda)}_{ST})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right),\\ \hat\beta_{ST}&=\left[\frac{\det H^{(\beta)}_{ST}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\beta)}_{ST}(\bar\lambda^{(\beta)}_{ST},\bar\beta^{(\beta)}_{ST},\bar\theta^{(\beta)}_{ST})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right),\\ \hat\theta_{ST}&=\left[\frac{\det H^{(\theta)}_{ST}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\theta)}_{ST}(\bar\lambda^{(\theta)}_{ST},\bar\beta^{(\theta)}_{ST},\bar\theta^{(\theta)}_{ST})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right).\end{aligned}\right\} \qquad (4.16)$$

    Next, in order to obtain the Bayes estimates of $\lambda,\beta$ and $\theta$ based on the LINEX loss function, we replace $g(\lambda,\beta,\theta)$ by $e^{-\rho\lambda}$, $e^{-\rho\beta}$ and $e^{-\rho\theta}$, respectively, and the corresponding $\phi^{(g)}_{LT}(\lambda,\beta,\theta)$ take the forms:

$$\phi^{(g)}_{LT}(\lambda,\beta,\theta)=\begin{cases}\phi^{(\lambda)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\lambda}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\lambda},\\[4pt] \phi^{(\beta)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\beta}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\beta},\\[4pt] \phi^{(\theta)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\theta}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\theta}.\end{cases} \qquad (4.17)$$

    Hence, $(\bar\lambda^{(\lambda)}_{LT},\bar\beta^{(\lambda)}_{LT},\bar\theta^{(\lambda)}_{LT})$, $(\bar\lambda^{(\beta)}_{LT},\bar\beta^{(\beta)}_{LT},\bar\theta^{(\beta)}_{LT})$ and $(\bar\lambda^{(\theta)}_{LT},\bar\beta^{(\theta)}_{LT},\bar\theta^{(\theta)}_{LT})$ can be computed by maximizing $\phi^{(\lambda)}_{LT}(\lambda,\beta,\theta)$, $\phi^{(\beta)}_{LT}(\lambda,\beta,\theta)$ and $\phi^{(\theta)}_{LT}(\lambda,\beta,\theta)$, respectively, through simultaneously solving the following systems:

    System 4: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1-\frac{\rho}{n}=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3=0$,

    System 5: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2-\frac{\rho}{n}=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3=0$,

    System 6: $\frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)-\nu_1=0$, $\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)-\nu_2=0$ and $\frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)-\nu_3-\frac{\rho}{n}=0$.

    Once again, we can derive that $H^{(\lambda)}_{LT}=H^{(\beta)}_{LT}=H^{(\theta)}_{LT}=H_{LT}$ by calculating the second-order derivatives of $\phi^{(\lambda)}_{LT}(\lambda,\beta,\theta)$, $\phi^{(\beta)}_{LT}(\lambda,\beta,\theta)$ and $\phi^{(\theta)}_{LT}(\lambda,\beta,\theta)$ at $(\bar\lambda^{(\lambda)}_{LT},\bar\beta^{(\lambda)}_{LT},\bar\theta^{(\lambda)}_{LT})$, $(\bar\lambda^{(\beta)}_{LT},\bar\beta^{(\beta)}_{LT},\bar\theta^{(\beta)}_{LT})$ and $(\bar\lambda^{(\theta)}_{LT},\bar\beta^{(\theta)}_{LT},\bar\theta^{(\theta)}_{LT})$, in the same manner as in (4.12)-(4.15). Therefore, the approximate Bayes estimates of $\lambda$, $\beta$ and $\theta$ based on the LINEX loss function are:

$$\left.\begin{aligned}\hat\lambda_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\lambda)}_{LT}(\bar\lambda^{(\lambda)}_{LT},\bar\beta^{(\lambda)}_{LT},\bar\theta^{(\lambda)}_{LT})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right)\right),\\ \hat\beta_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\beta)}_{LT}(\bar\lambda^{(\beta)}_{LT},\bar\beta^{(\beta)}_{LT},\bar\theta^{(\beta)}_{LT})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right)\right),\\ \hat\theta_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{1/2}\exp\left(n\left[\phi^{(\theta)}_{LT}(\bar\lambda^{(\theta)}_{LT},\bar\beta^{(\theta)}_{LT},\bar\theta^{(\theta)}_{LT})-\phi(\bar\lambda,\bar\beta,\bar\theta)\right]\right)\right).\end{aligned}\right\} \qquad (4.18)$$

    In order to calculate $100(1-\alpha)\%$ HPD credible intervals for the Bayes estimates under both the SE and LINEX loss functions for any parameter, say $\delta$, we follow the steps below:

    HPD credible interval:

    1. Simulate a censored sample of size $n$ from the Dagum distribution given in (1.1) and calculate the estimate of $\delta$ under a certain choice of $k,r,T_1$ and $T_2$.

    2. Repeat the previous step $M$ times to get $\delta_1,\delta_2,\dots,\delta_M$, with ordered values $\delta_{1:M},\delta_{2:M},\dots,\delta_{M:M}$.

    3. The $100(1-\alpha)\%$ HPD credible interval for $\delta$ is the shortest among the intervals $(\delta_{j:M},\,\delta_{j+(1-\alpha)M:M})$, $j=1,2,\dots,\alpha M$.
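    Step 3 above can be sketched directly; given the ordered draws, one scans all candidate intervals covering a fraction $1-\alpha$ of them and keeps the shortest (the function name is hypothetical):

```python
def hpd_interval(draws, alpha=0.05):
    """Shortest interval (δ_{j:M}, δ_{j+(1-α)M:M}) over j = 1, ..., αM."""
    d = sorted(draws)
    m = len(d)
    span = int((1 - alpha) * m)           # number of draws the interval must cover
    best = min(range(m - span), key=lambda j: d[j + span] - d[j])
    return d[best], d[best + span]
```

    For a symmetric, unimodal set of draws this agrees with the equal-tail interval; for skewed draws it is shorter.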

    In the previous subsection, we used Tierney-Kadane's approximation to derive the Bayes estimates of the parameters. However, it is not possible to obtain HPD credible intervals using that method. In this subsection, we adopt a Metropolis-Hastings-within-Gibbs sampling approach to generate random samples from the conditional densities of the parameters, and use them to obtain HPD credible intervals and point Bayes estimates. From (3.1) and (4.5), the posterior density of $\lambda,\beta$ and $\theta$ can be extracted as

$$\pi(\lambda,\beta,\theta|x_k)\propto\lambda^k\beta^k\theta^k\exp\left[-(\nu_1\lambda+\nu_2\beta+\nu_3\theta)\right]\times\left[1-(1+\lambda T^{-\beta})^{-\theta}\right]^{n-k}\prod_{i=1}^{k}x_i^{-(\beta+1)}\left(1+\lambda x_i^{-\beta}\right)^{-(\theta+1)}. \qquad (4.19)$$

    In the following algorithm, we employ Metropolis-Hastings (M-H) technique with normal proposal distribution to generate samples from these distributions.

    1. Start with initial values of the parameters $(\lambda^{(0)},\beta^{(0)},\theta^{(0)})$. Then, simulate a censored sample of size $k$ under a certain choice of $m,r,T_1$ and $T_2$ from the Dagum distribution given in (1.1) and set $l=1$.

    2. Generate $\lambda^{(*)},\beta^{(*)},\theta^{(*)}$ from the proposal distributions $N(\lambda^{(l-1)},1)$, $N(\beta^{(l-1)},1)$ and $N(\theta^{(l-1)},1)$, respectively.

    3. Calculate the acceptance probability $r=\min\left(1,\frac{\pi(\lambda^{(*)},\beta^{(*)},\theta^{(*)})}{\pi(\lambda^{(l-1)},\beta^{(l-1)},\theta^{(l-1)})}\right)$.

    4. Generate $U$ from uniform(0, 1).

    5. Accept the proposal and set $(\lambda^{(l)},\beta^{(l)},\theta^{(l)})=(\lambda^{(*)},\beta^{(*)},\theta^{(*)})$ if $U<r$. Otherwise, reject the proposal and set $(\lambda^{(l)},\beta^{(l)},\theta^{(l)})=(\lambda^{(l-1)},\beta^{(l-1)},\theta^{(l-1)})$.

    6. Set $l=l+1$.

    7. Repeat Steps 2-6, M times, and obtain λ(l),β(l) and θ(l) for l=1,...,M.
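    The accept/reject loop of Steps 2-6 is generic, so it can be sketched for a single parameter with an arbitrary unnormalized posterior. The helper below is a hypothetical illustration (it works on the log scale for numerical stability, and the toy target is an Exp(1) density, not the posterior (4.19)):

```python
import math
import random

def metropolis_hastings(log_post, start, m=5000, scale=1.0, seed=1):
    """Random-walk M-H mirroring Steps 1-7: normal proposal N(current, scale),
    accept with probability min(1, π(candidate)/π(current))."""
    rng = random.Random(seed)
    cur = start
    cur_lp = log_post(cur)
    chain = []
    for _ in range(m):
        cand = rng.gauss(cur, scale)           # Step 2: normal proposal
        cand_lp = log_post(cand)
        # Steps 3-5: U < r  <=>  log U < log π(cand) - log π(cur)
        if math.log(rng.random()) < cand_lp - cur_lp:
            cur, cur_lp = cand, cand_lp
        chain.append(cur)
    return chain

# Toy target: unnormalized Exp(1) posterior, log π(x) = -x for x > 0
chain = metropolis_hastings(lambda x: -x if x > 0 else float("-inf"), start=1.0)
```

    For the trivariate posterior (4.19), the same update would be applied to each of $\lambda$, $\beta$ and $\theta$ in turn within each Gibbs cycle.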

    By using the random samples generated from the above Gibbs sampling technique, and with $N$ denoting the number of burn-in samples, the approximate Bayes estimates of the parameters under the squared error and LINEX loss functions can be obtained as

$$\hat\lambda_{SM}=\frac{1}{M-N}\sum_{l=N+1}^{M}\lambda^{(l)},\quad \hat\beta_{SM}=\frac{1}{M-N}\sum_{l=N+1}^{M}\beta^{(l)},\quad \hat\theta_{SM}=\frac{1}{M-N}\sum_{l=N+1}^{M}\theta^{(l)},$$
$$\hat\lambda_{LM}=-\frac{1}{\rho}\log\left(\frac{\sum_{l=N+1}^{M}\exp(-\rho\lambda^{(l)})}{M-N}\right),\quad \hat\beta_{LM}=-\frac{1}{\rho}\log\left(\frac{\sum_{l=N+1}^{M}\exp(-\rho\beta^{(l)})}{M-N}\right),$$

    and

$$\hat\theta_{LM}=-\frac{1}{\rho}\log\left(\frac{\sum_{l=N+1}^{M}\exp(-\rho\theta^{(l)})}{M-N}\right).$$
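    The two estimators above reduce to a posterior mean and a log-of-mean-exponential over the retained draws; a minimal sketch (function names hypothetical):

```python
import math

def se_estimate(draws, burn):
    # Posterior mean of the retained draws (squared error loss)
    kept = draws[burn:]
    return sum(kept) / len(kept)

def linex_estimate(draws, burn, rho):
    # -1/ρ * log( mean of exp(-ρ δ^{(l)}) ) over the retained draws
    kept = draws[burn:]
    return -math.log(sum(math.exp(-rho * d) for d in kept) / len(kept)) / rho
```

    By Jensen's inequality, with $\rho>0$ the LINEX estimate lies below the posterior mean whenever the draws are not all equal, reflecting the heavier penalty on overestimation.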

    MCMC HPD credible interval Algorithm:

    1. Arrange the retained values $\lambda^{(l)},\beta^{(l)}$ and $\theta^{(l)}$ in increasing order of magnitude.

    2. Find the position of the lower bound, which is $(M-N)\alpha/2$, then determine the lower bounds of $\lambda,\beta$ and $\theta$.

    3. Find the position of the upper bound, which is $(M-N)(1-\alpha/2)$, then determine the upper bounds of $\lambda,\beta$ and $\theta$.

    4. Repeat the above steps $M$ times.

    5. From the resulting collection of MCMC HPD credible intervals, find the average values of the lower and upper bounds to obtain the credible intervals of $\lambda,\beta$ and $\theta$.

    In this section, we demonstrate the usefulness of the theoretical findings of this paper by conducting a series of simulation experiments. The simulations report the bias and estimated risk of the maximum likelihood and Bayesian estimates, respectively. The Bayes estimates are calculated based on the squared error and LINEX loss functions. In addition, the 95% and 90% confidence, bootstrap and HPD credible intervals are calculated with the corresponding widths. The simulation experiments can be explained through the following steps:

    We evaluate the performance of the Bayes predictors obtained from the LINEX and squared error loss functions. In this case, to investigate the sensitivity of the predictors with respect to the choices of the hyperparameters, the above-mentioned priors are considered. We perform simulations to investigate the behavior of the different methods for $n=30$ and for various $r,k,T_1,T_2$.

    1. Fix the different censoring cases as given in (7): $X_{r=10:n}$, $X_{r=15:n}$, $X_{k=20:n}$, $X_{k=25:n}$, $T_1=18$, $T_1=24$, $T_2=30$ and $T_2=36$; next, generate the censored samples from the Dagum distribution using $\lambda=5,\beta=2$ and $\theta=2$.

    2. Use each of the censoring cases in step (1) to calculate the MLEs by solving the system of nonlinear equations (3.3), (3.4) and (3.5).

    3. Again, we use each of the censoring cases in step (1) to calculate the Bayesian estimates by using Tierney-Kadane's approximation of Section 3 for both the squared error and LINEX loss functions. The hyperparameters are taken as the inverses of the initial values, namely $\nu_1=0.2$ and $\nu_2=\nu_3=0.5$. The parameter $\rho$ in LINEX is chosen as -0.5, 1.0 and 1.5.

    4. Steps (1)-(3) are repeated 1000 times, and the bias and estimated risk (ER) in each case are reported in Table 1. The ER of a parameter $\varphi$ under the squared error and LINEX loss functions is computed by

$$ER_S=\frac{1}{R}\sum_{i=1}^{R}(\hat\varphi_i-\varphi)^2,\qquad ER_L=\frac{1}{R}\sum_{i=1}^{R}\left(\exp\left(\rho(\hat\varphi_i-\varphi)\right)-\rho(\hat\varphi_i-\varphi)-1\right),$$
    Table 1.  Bias, MSE and estimated risk (ER) of the parameter estimates when n=30, λ=5, β=2 and θ=2.
    T          λML      λST      λLT(ρ=-0.5)  λLT(ρ=1.0)  λLT(ρ=1.5)  βML      βST      βLT(ρ=-0.5)  βLT(ρ=1.0)  βLT(ρ=1.5)  θML      θST      θLT(ρ=-0.5)  θLT(ρ=1.0)  θLT(ρ=1.5)
    Xr=10:n 0.2363 0.2324 0.2149 -0.0605 0.0144 -0.0284 0.0214 0.0264 0.0157 0.0085 0.1374 -0.4014 -0.2774 -0.3524 -0.2393
    8.0744 0.1993 0.1928 0.1553 0.1618 0.6231 0.2048 0.1938 0.0941 0.0976 0.4086 0.6174 0.2512 0.3189 0.2156
    Xr=15:n 0.0067 0.3116 0.2985 0.0925 0.1149 0.0702 0.0753 0.0834 0.078 0.0866 -0.1417 -0.1755 -0.1803 -0.2811 0.1547
    4.0237 0.1795 0.1768 0.1428 0.1529 0.3233 0.1876 0.1851 0.0802 0.0879 0.2084 0.3061 0.1431 0.2539 0.1394
    Xk=20:n 0.4301 0.3551 0.3714 0.2116 0.2363 0.1619 0.1027 0.1113 0.1141 0.1085 -0.106 -0.1258 -0.1694 -0.2164 0.1096
    2.359 0.1517 0.1435 0.1339 0.1421 0.179 0.1419 0.1471 0.0726 0.0676 0.1209 0.1637 0.1128 0.1953 0.0987
    Xk=25:n 0.1065 0.4041 0.3928 0.2801 0.3183 0.0599 0.131 0.134 0.1255 0.133 -0.0684 -0.0768 -0.1111 -0.1882 -0.0928
    1.5964 0.1407 0.1327 0.1216 0.1158 0.1059 0.1081 0.0555 0.0628 0.0646 0.0811 0.0993 0.1002 0.1698 0.0835
    T1=18 0.4675 0.3669 0.3542 0.2379 0.2477 0.1206 0.1256 0.1258 0.1235 0.1209 -0.1262 0.1234 -0.1131 -0.1814 0.1035
    1.9223 0.1461 0.1381 0.1237 0.1224 0.1505 0.1168 0.1011 0.0711 0.0668 0.0952 0.112 0.1021 0.1737 0.0932
    T1=24 -0.1587 0.2724 0.2641 0.06 0.1081 -0.0359 0.0564 0.0765 0.0665 0.0544 0.1528 -0.1616 -0.157 -0.3176 0.1755
    5.1027 0.1839 0.1871 0.1535 0.1565 0.3924 0.1959 0.1888 0.0898 0.0949 0.2527 0.3225 0.1621 0.287 0.1581
    T2=30 0.3561 0.4094 0.4098 0.3132 0.2822 0.0627 0.1302 0.1395 0.1346 0.1357 -0.0716 -0.0833 -0.1149 -0.1548 -0.0851
    1.4869 0.1339 0.1309 0.2813 0.1135 0.0952 0.0877 0.0455 0.0521 0.0532 0.0618 0.0777 0.0936 0.1396 0.0767
    T2=36 0.2507 0.3533 0.3359 0.1864 0.2058 0.1621 0.0937 0.1057 0.092 0.0951 -0.1454 0.1328 -0.1532 -0.2664 0.1301
    3.1372 0.1598 0.1516 0.1372 0.1446 0.2343 0.1507 0.1781 0.0727 0.0756 0.1525 0.2219 0.1385 0.2406 0.1172
    The first row represents the Bias while the second row represents the MSE and ER.


    where $\hat\varphi_i$ is the estimate of $\varphi$ in the $i$th replication and $R$ is the number of replications.

    5. The 90% and 95% approximate confidence, bootstrap and HPD credible intervals, with their widths, for the parameters $\lambda,\beta$ and $\theta$ are reported in Tables 2, 3 and 4, respectively.

    From Tables 1, 2, 3 and 4, we see that:

    Table 2.  Bias and estimated risk (ER) of the parameter estimates using MCMC when n=30, λ=5, β=2 and θ=2.
    T          λSE      λLE(ρ=-0.5)  λLE(ρ=1.0)  λLE(ρ=1.5)  βSE      βLE(ρ=-0.5)  βLE(ρ=1.0)  βLE(ρ=1.5)  θSE      θLE(ρ=-0.5)  θLE(ρ=1.0)  θLE(ρ=1.5)
    Xr=10:n -2.3890 -2.1640 -2.8200 -2.7040 -1.1830 -1.1730 -1.2130 -1.2030 -0.1960 -0.1470 -0.3210 -0.2830
    0.0276 0.0373 0.0188 0.0181 0.0637 0.0653 0.0564 0.0593 0.0081 0.0089 0.0062 0.0064
    Xr=15:n -2.7440 -2.6220 -3.0110 -2.9340 -1.1810 -1.1740 -1.2030 -1.1960 -0.4460 -0.4160 -0.5270 -0.5020
    0.0222 0.0270 0.0151 0.0184 0.0382 0.0386 0.0336 0.0346 0.0063 0.0065 0.0047 0.0047
    Xk=20:n -2.9700 -2.8930 -3.1550 -3.1000 -1.0840 -1.0770 -1.1050 -1.0980 -0.6160 -0.5960 -0.6740 -0.6550
    0.0118 0.0091 0.0068 0.0065 0.0143 0.0147 0.0118 0.0129 0.0027 0.0018 0.0014 0.0025
    Xk=25:n -3.1700 -3.1150 -3.3110 -3.2680 -1.0820 -1.0770 -1.0990 -1.0940 -0.7550 -0.7380 -0.8000 -0.7850
    0.0083 0.0038 0.0021 0.0038 0.0073 0.0059 0.0066 0.0055 0.0003 0.0013 0.0011 0.0017
    T1=18 -2.6400 -2.4940 -2.9540 -2.8650 -1.2480 -1.2410 -1.2700 -1.2630 -0.3480 -0.3110 -0.4470 -0.4160
    0.0187 0.0200 0.0118 0.0113 0.0351 0.0354 0.0299 0.0311 0.0059 0.0060 0.0040 0.0048
    T1=24 -2.8350 -2.7320 -3.0700 -3.0010 -1.1650 -1.1590 -1.1860 -1.1790 -0.5250 -0.4990 -0.5960 -0.5740
    0.0092 0.0112 0.0065 0.0086 0.0254 0.0240 0.0213 0.0229 0.0034 0.0036 0.0031 0.0025
    T2=30 -3.1250 -3.0620 -3.2830 -3.2350 -1.1130 -1.1070 -1.1300 -1.1250 -0.7110 -0.6930 -0.7610 -0.7450
    0.0100 0.0083 0.0043 0.0068 0.0201 0.0208 0.0195 0.0184 0.0017 0.0018 0.0012 0.0014
    T2=36 -3.2540 -3.2040 -3.3820 -3.3430 -1.1130 -1.1080 -1.1280 -1.1230 -0.8060 -0.7920 -0.8470 -0.8340
    0.0096 0.0075 0.0087 0.0072 0.0061 0.0057 0.0064 0.0065 0.0020 0.0008 0.0013 0.0012
    The first row represents the Bias while the second row represents ER.

    Table 3.  95% and 90% Interval estimation of the parameter λ when n=30,λ=5,β=2 and θ=2.
    T          Level  ML              Bootstrap       HPDS            HPDLT(ρ=-0.5)   HPDLT(ρ=1.0)    HPDLT(ρ=1.5)
    (for each case, the first row gives the lower and upper bounds of each interval and the second row gives the corresponding widths)
    Xr=10:n 95% 3.8809 7.1027 4.301 5.85 4.2645 5.691 4.703 5.6513 4.322 5.6302 4.2076 5.6923
    3.2218 1.549 1.4266 0.9483 1.3082 1.4846
    90% 3.8901 7.0931 4.3333 5.8146 4.2644 5.6723 4.703 5.5978 4.322 5.5623 4.2076 5.6161
    3.203 1.4813 1.4079 0.8949 1.2402 1.4085
    Xr=15:n 95% 4.2337 5.8233 4.6868 5.7555 4.6531 5.6376 4.9182 5.6107 4.6543 5.5599 4.6097 5.6464
    1.5896 1.0687 0.9845 0.6925 0.9057 1.0368
    90% 4.2337 5.8099 4.7149 5.7202 4.6431 5.6065 4.9254 5.5946 4.6543 5.5128 4.6097 5.6421
    1.5762 1.0052 0.9634 0.6692 0.8585 1.0324
    Xk=20:n 95% 4.6878 6.1134 4.8764 5.6987 4.8931 5.6201 4.9164 5.5526 4.8637 5.5532 4.8085 5.5939
    1.4256 0.8222 0.727 0.6362 0.6895 0.7854
    90% 4.6896 6.0885 4.898 5.6722 4.8931 5.6135 4.9204 5.5567 4.8637 5.5449 4.8085 5.5508
    1.3989 0.7743 0.7204 0.6363 0.6812 0.7423
    Xk=25:n 95% 4.9406 5.8616 4.9989 5.655 4.9981 5.5968 4.9585 5.5467 4.9862 5.5477 4.9615 5.5974
    0.921 0.6561 0.5987 0.5882 0.5614 0.6359
    90% 4.9501 5.8366 4.9989 5.6381 4.9984 5.5068 4.9985 5.5381 4.9862 5.5197 4.9615 5.5821
    0.8865 0.6392 0.5084 0.5396 0.5335 0.6206
    T1=18 95% 4.5417 5.8111 4.9665 5.676 4.9704 5.5971 4.9182 5.5517 4.9683 5.5676 4.909 5.6031
    1.2694 0.7095 0.6267 0.6335 0.5993 0.6942
    90% 4.5451 5.8043 4.9865 5.6605 4.9904 5.5962 4.9645 5.5421 4.9683 5.5277 4.909 5.5548
    1.2592 0.674 0.6058 0.5776 0.5594 0.6458
    T1=24 95% 4.4725 5.9568 4.3615 5.7797 4.3455 5.6537 4.885 5.6217 4.5357 5.5698 4.4527 5.6489
    1.4843 1.4181 1.3082 0.7367 1.0341 1.1962
    90% 4.4765 5.9381 4.5019 5.743 4.5445 5.6123 4.885 5.6061 4.5357 5.516 4.4527 5.5817
    1.4616 1.1411 1.0678 0.721 0.9803 1.129
    T2=30 95% 4.8469 5.8925 4.9994 5.651 4.9983 5.5882 4.9772 5.5389 4.9832 5.5436 4.9779 5.6068
    1.0456 0.6516 0.5899 0.5617 0.5604 0.6289
    90% 4.8547 5.8712 4.9991 5.6347 4.9993 5.4456 4.9992 5.5373 4.9972 5.5161 4.9712 5.5067
    1.0165 0.6356 0.4463 0.5381 0.5189 0.5355
    T2=36 95% 4.3165 6.296 4.7768 5.7342 4.6733 5.6214 4.9254 5.5959 4.7816 5.575 4.7175 5.6404
    1.9795 0.9574 0.9481 0.6705 0.7934 0.923
    90% 4.3255 6.2768 4.8078 5.7113 4.6893 5.5926 4.9645 5.6077 4.7816 5.5594 4.7175 5.5734
    1.9513 0.9035 0.9033 0.6432 0.7778 0.856
    HPDS: HPD credible interval based on least squared error
    HPDLT: HPD credible interval based on LINEX loss function

    Table 4.  95% and 90% Interval estimation of the parameter β when n=30,λ=5,β=2 and θ=2.
    T | ML | Bootstrap | HPDS | HPDLT (ρ=0.5) | HPDLT (ρ=1.0) | HPDLT (ρ=1.5)
    For each scheme, the first row gives the interval bounds and the second row the interval width.
    Xr=10:n 95% 1.7489 2.656 1.9078 2.2267 1.9125 2.1361 1.4056 2.02 1.1589 2.5683 1.4042 2.0164
    0.9071 0.3189 0.2236 0.6144 1.4094 0.6122
    90% 1.7495 2.6351 1.9145 2.2176 1.9101 2.1315 1.419 2.0052 1.1669 2.5609 1.4208 2.0078
    0.8856 0.3031 0.2214 0.5862 1.394 0.587
    Xr=15:n 95% 1.8257 2.514 1.9782 2.2249 1.9681 2.1799 1.5921 2.0159 1.4358 2.3563 1.5886 2.0187
    0.6883 0.2467 0.2118 0.4239 0.9205 0.4302
    90% 1.8301 2.5079 1.9788 2.2192 1.9491 2.1611 1.6049 2.0028 1.4378 2.3499 1.5977 2.0115
    0.6778 0.2404 0.212 0.3979 0.9122 0.4138
    Xk=20:n 95% 1.8536 2.4237 1.9789 2.2207 1.9671 2.1645 1.6915 2.0137 1.7138 2.2432 1.6882 2.0141
    0.5701 0.2418 0.1974 0.3222 0.5294 0.3259
    90% 1.8584 2.4104 1.9912 2.2169 1.9895 2.1515 1.7022 2.0032 1.7165 2.1922 1.6973 2.0022
    0.552 0.2257 0.162 0.301 0.4757 0.3049
    Xk=25:n 95% 1.805 2.2553 1.9888 2.2191 1.9878 2.1742 1.7504 2.0143 1.6093 2.0064 1.7519 2.013
    0.4503 0.2303 0.1864 0.2639 0.3971 0.2611
    90% 1.8102 2.2424 1.9989 2.2144 1.9978 2.1793 1.7576 2.0076 1.6094 2.0009 1.7599 2.0049
    0.4322 0.2155 0.1815 0.25 0.3914 0.245
    T1=18 95% 1.8275 2.2544 1.9882 2.2192 1.9718 2.1698 1.7269 2.0157 1.6462 2.1718 1.7317 2.0148
    0.4269 0.231 0.198 0.2888 0.5256 0.283
    90% 1.8337 2.2401 1.9982 2.2156 1.9899 2.175 1.7356 2.01 1.7526 2.156 1.7401 2.007
    0.4064 0.2174 0.1851 0.2744 0.4034 0.2669
    T1=24 95% 1.8581 2.2525 1.9785 2.2282 1.9698 2.1857 1.5349 2.016 1.556 2.4197 1.5334 2.0189
    0.3944 0.2497 0.2159 0.4812 0.8637 0.4855
    90% 1.8769 2.2362 1.9868 2.2208 1.9581 2.1738 1.5473 2.0055 1.5628 2.4043 1.5461 2.0107
    0.3593 0.234 0.2157 0.4581 0.8415 0.4646
    T2=30 95% 1.8678 2.2417 1.9978 2.217 1.9998 2.1812 1.7666 2.0137 1.7155 2.0919 1.7681 2.0111
    0.3739 0.2192 0.1814 0.2471 0.3764 0.243
    90% 1.899 2.2401 1.9995 2.2141 1.9971 2.1768 1.7726 2.0079 1.7156 2.0682 1.7737 2.005
    0.3411 0.2146 0.1797 0.2353 0.3526 0.2313
    T2=36 95% 1.9684 2.264 1.9784 2.2246 1.9668 2.1652 1.6365 2.0148 1.3802 2.3056 1.6354 2.0157
    0.2956 0.2462 0.1984 0.3784 0.9254 0.3803
    90% 1.9762 2.2549 1.9802 2.2186 1.988 2.1536 1.6502 2.0056 1.3855 2.2989 1.6443 2.0019
    0.2786 0.2384 0.1656 0.3553 0.9134 0.3576
    HPDS: HPD credible interval based on least squared error
    HPDLT: HPD credible interval based on LINEX loss function

    Table 5.  95% and 90% Interval estimation of the parameter θ when n=30,λ=5,β=2 and θ=2.
    T | ML | Bootstrap | HPDS | HPDLT (ρ=0.5) | HPDLT (ρ=1.0) | HPDLT (ρ=1.5)
    For each scheme, the first row gives the interval bounds and the second row the interval width.
    Xr=10:n 95% 0.7189 3.2086 0.4627 2.8471 1.4028 2.6683 1.2825 2.2195 1.5545 2.3768 1.625 2.7123
    2.4898 2.3844 1.2655 0.937 0.8223 1.0873
    90% 0.8205 3.1491 0.5627 2.9076 1.4032 2.6546 1.2825 2.1632 1.5636 2.3573 1.625 2.7123
    2.3286 2.3449 1.2514 0.8806 0.7937 1.0873
    Xr=15:n 95% 1.1368 2.8548 1.0812 2.6651 1.7381 2.4344 1.5355 2.1759 1.5868 2.215 1.7435 2.5074
    1.7181 1.5839 0.6963 0.6404 0.6282 0.7638
    90% 1.1726 2.8136 1.1012 2.6801 1.7442 2.4119 1.5555 2.1887 1.5964 2.1953 1.7435 2.47
    1.641 1.5789 0.6677 0.6332 0.5989 0.7265
    Xk=20:n 95% 1.3395 2.647 1.2596 2.4642 1.5794 2.2142 1.6333 2.1204 1.5763 2.0877 1.8405 2.4217
    1.3076 1.2046 0.6348 0.4871 0.5114 0.5812
    90% 1.3623 2.6044 1.3175 2.5177 1.8941 2.516 1.6333 2.104 1.5833 2.0654 1.8405 2.3672
    1.2421 1.2002 0.6219 0.4707 0.4821 0.5268
    Xk=25:n 95% 1.4763 2.5274 1.4401 2.3948 1.7151 2.3253 1.6987 2.0984 1.766 2.0669 1.87 2.3354
    1.0511 0.9546 0.6102 0.3997 0.3009 0.4654
    90% 1.5002 2.5044 1.4401 2.3452 1.723 2.2991 1.6987 2.0823 1.7685 2.0411 1.87 2.3101
    1.0043 0.9051 0.5761 0.3835 0.2726 0.4401
    T1=18 95% 1.4217 2.5676 1.3419 2.3888 1.6474 2.2818 1.6718 2.0971 1.6078 2.0092 1.8339 2.3407
    1.1459 1.0468 0.6344 0.4253 0.4014 0.5068
    90% 1.447 2.5303 1.3419 2.3042 1.589 2.1938 1.6818 2.104 1.615 1.9903 1.8339 2.3349
    1.0833 0.9623 0.6048 0.4222 0.3753 0.501
    T1=24 95% 1.0018 2.9762 0.9066 2.7038 1.6136 2.6236 1.4692 2.2074 1.4449 2.0777 1.7081 2.5718
    1.9744 1.7972 1.01 0.7382 0.6328 0.8638
    90% 1.082 2.8934 1.1066 2.7118 1.6187 2.6062 1.4892 2.2104 1.4482 2.0705 1.7081 2.5337
    1.8114 1.6052 0.9875 0.7213 0.6223 0.8256
    T2=30 95% 1.4913 2.4924 1.4721 2.3811 1.7284 2.2596 1.7319 2.1083 1.7869 2.0398 1.8609 2.2978
    1.001 0.909 0.5312 0.3765 0.2529 0.437
    90% 1.5199 2.4638 1.4721 2.3679 1.7331 2.2516 1.7419 2.1107 1.7968 2.0363 1.8709 2.316
    0.9438 0.8959 0.5185 0.3688 0.2395 0.4251
    T2=36 95% 1.2225 2.7506 1.1188 2.5164 1.8853 2.5395 1.5903 2.1688 1.5935 2.1895 1.7732 2.4407
    1.5281 1.3977 0.6542 0.5784 0.596 0.6675
    90% 1.2715 2.7046 1.1188 2.4309 1.648 2.2724 1.5913 2.1692 1.5942 2.1794 1.7732 2.4396
    1.4331 1.3121 0.6244 0.5799 0.5852 0.6664
    HPDS: HPD credible interval based on least squared error
    HPDLT: HPD credible interval based on LINEX loss function

    Table 6.  Credible intervals using MCMC for the parameters when n=30,λ=5,β=2 and θ=2.
    T | λ 95% | λ 90% | β 95% | β 90% | θ 95% | θ 90%
    For each scheme, the first row gives the interval bounds and the second row the interval width.
    Xr=10:n 2.2612 5.7079 2.397 5.2419 1.4673 2.2299 1.5146 2.1574 1.089 2.7555 1.1792 2.568
    3.4467 2.8449 0.7627 0.6427 1.6665 1.3888
    Xr=15:n 3.2011 5.774 3.3156 5.4827 1.5123 2.1693 1.5558 2.1016 0.9799 2.3119 1.0516 2.1579
    2.573 2.1672 0.657 0.5459 1.332 1.1063
    Xk=20:n 3.1594 5.2554 3.2552 5.0265 1.6151 2.2627 1.6567 2.1992 1.8853 2.991 1.9577 2.8866
    2.096 1.7713 0.6476 0.5425 1.1057 0.9289
    Xk=25:n 4.0678 5.8652 4.1599 5.6728 1.6377 2.2228 1.6794 2.1732 1.8064 2.7845 1.8634 2.6919
    1.7974 1.5129 0.5851 0.4938 0.9781 0.8285
    T1=18 2.2023 5.0454 3.3361 5.7186 1.4538 2.0967 1.4946 2.0382 1.0156 2.5102 1.0914 2.3422
    2.8431 2.3825 0.6429 0.5436 1.4946 1.2508
    T1=24 1.1682 5.5887 1.2789 5.2896 1.5336 2.1702 1.5748 2.1128 0.92 2.1664 0.9965 2.036
    2.4205 2.0107 0.6367 0.538 1.2464 1.0395
    T2=30 4.0586 5.978 4.1574 5.7554 1.606 2.2067 1.646 2.1437 1.8271 2.8706 1.8869 2.7562
    1.9194 1.598 0.6007 0.4976 1.0434 0.8693
    T2=36 4.0154 5.7364 4.1155 5.5376 1.6228 2.1806 1.6627 2.1322 1.7817 2.7215 1.8362 2.6184
    1.7209 1.4221 0.5577 0.4695 0.9398 0.7823


    1. The estimate of λ is overestimated except in just a few cases. Also, the Bayesian estimate of λ is best in terms of bias under the LINEX loss function at ρ=1.0, and the ER likewise supports the Bayesian estimate under the LINEX loss function.

    2. Again, the estimate of β is overestimated except in just a few cases. Also, the Bayesian estimate of β behaves better in terms of bias under the LINEX loss function; a similar statement holds for the ER.

    3. Once again, the estimate of θ is underestimated in most cases. Both the bias and the ER indicate that the Bayesian estimate of θ under the LINEX loss function is the best.

    4. The HPD credible interval estimation for λ behaves best in terms of interval width under the LINEX loss function with ρ=0.5.

    5. The HPD credible interval estimation for β behaves best in terms of interval width under the SE loss function.

    6. The HPD credible interval estimation for θ behaves best in terms of interval width under the LINEX loss function with ρ=1.0.
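The observations above rest on how the Bayes estimates and HPD credible intervals are formed from posterior draws. A minimal sketch in pure Python (assuming a list of MCMC posterior draws; an illustration, not the paper's actual code):

```python
import math

def bayes_se(samples):
    """Bayes estimate under squared-error loss: the posterior mean."""
    return sum(samples) / len(samples)

def bayes_linex(samples, rho):
    """Bayes estimate under LINEX loss: -(1/rho) * log E[exp(-rho * theta)]."""
    m = sum(math.exp(-rho * s) for s in samples) / len(samples)
    return -math.log(m) / rho

def hpd_interval(samples, level=0.95):
    """Shortest interval containing a `level` fraction of the posterior draws."""
    xs = sorted(samples)
    k = math.ceil(level * len(xs))
    _, i = min((xs[i + k - 1] - xs[i], i) for i in range(len(xs) - k + 1))
    return xs[i], xs[i + k - 1]
```

For ρ > 0 the LINEX loss penalizes overestimation more heavily, so `bayes_linex` sits below the posterior mean; the sign and magnitude of ρ are what drive the differing biases reported above.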

    Here we use one data set for the purpose of comparing the estimators presented in this paper. The data set, taken from Nichols and Padgett [29], consists of 100 observations on the breaking stress of carbon fibers (in GPa). Dey et al. [30] have fitted the Dagum distribution to this data set. The data are: 3.7, 2.74, 2.73, 3.11, 3.27, 2.87, 4.42, 2.41, 3.19, 3.28, 3.09, 1.87, 3.75, 2.43, 2.95, 2.96, 2.3, 2.67, 3.39, 2.81, 4.2, 3.31, 3.31, 2.85, 3.15, 2.35, 2.55, 2.81, 2.77, 2.17, 1.41, 3.68, 2.97, 2.76, 4.91, 3.68, 3.19, 1.57, 0.81, 1.59, 2, 1.22, 2.17, 1.17, 5.08, 3.51, 2.17, 1.69, 1.84, 0.39, 3.68, 1.61, 2.79, 4.7, 1.57, 1.08, 2.03, 1.89, 2.88, 2.82, 2.5, 3.6, 1.47, 3.11, 3.22, 1.69, 3.15, 4.9, 2.97, 3.39, 2.93, 3.22, 3.33, 2.55, 2.56, 3.56, 2.59, 2.38, 2.83, 1.92, 1.36, 0.98, 1.84, 1.59, 5.56, 1.73, 1.12, 1.71, 2.48, 1.18, 1.25, 4.38, 2.48, 0.85, 2.03, 1.8, 1.61, 2.12, 2.05, 3.65.

    The point and interval estimation techniques of Sections 3, 4 and 5 can be applied to this data set through the following steps:

    1. Sorting the data set in ascending order.

    2. Applying the censoring scheme C-UHCS(m,r;T1,T2), using one arbitrary case of Type-II censoring at X25:100=1.74 and one arbitrary case of Type-I censoring at T=2.4.

    3. Applying the point estimation of the parameters λ, β and θ using the MLE, Tierney-Kadane and MCMC methods (MCMC is based on 15000 repetitions with 5000 burn-in).

    4. Calculating the 95% and 90% HPD credible intervals using MCMC based on squared error loss function.

    5. The results of the point and interval estimation of the unknown parameters are displayed in Tables 7 and 8.
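Steps 1 and 2 can be sketched in code. A minimal illustration in pure Python using the carbon-fiber data (the subsequent MLE/Tierney-Kadane/MCMC fitting of step 3 is omitted):

```python
# Breaking stress of carbon fibers (Nichols and Padgett), n = 100
data = [
    3.7, 2.74, 2.73, 3.11, 3.27, 2.87, 4.42, 2.41, 3.19, 3.28,
    3.09, 1.87, 3.75, 2.43, 2.95, 2.96, 2.3, 2.67, 3.39, 2.81,
    4.2, 3.31, 3.31, 2.85, 3.15, 2.35, 2.55, 2.81, 2.77, 2.17,
    1.41, 3.68, 2.97, 2.76, 4.91, 3.68, 3.19, 1.57, 0.81, 1.59,
    2, 1.22, 2.17, 1.17, 5.08, 3.51, 2.17, 1.69, 1.84, 0.39,
    3.68, 1.61, 2.79, 4.7, 1.57, 1.08, 2.03, 1.89, 2.88, 2.82,
    2.5, 3.6, 1.47, 3.11, 3.22, 1.69, 3.15, 4.9, 2.97, 3.39,
    2.93, 3.22, 3.33, 2.55, 2.56, 3.56, 2.59, 2.38, 2.83, 1.92,
    1.36, 0.98, 1.84, 1.59, 5.56, 1.73, 1.12, 1.71, 2.48, 1.18,
    1.25, 4.38, 2.48, 0.85, 2.03, 1.8, 1.61, 2.12, 2.05, 3.65,
]

# Step 1: sort the data set in ascending order
xs = sorted(data)

# Step 2a: Type-II censoring -- observe only the first r order statistics
r = 25
type2_sample = xs[:r]

# Step 2b: Type-I censoring -- observe only failures occurring by time T
T = 2.4
type1_sample = [x for x in xs if x <= T]
```

The two censored samples then feed the likelihoods of the C-UHCS(m,r;T1,T2) scheme for the estimation methods of step 3.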

    Table 7.  Point estimation and the estimated variances of the unknown parameters using the real data set.
    Censoring MLE Tierney-Kadane MCMC
    SE LE(-.5) LE(1) LE(1.5) SE LE(-.5) LE(1) LE(1.5)
    x25:100 λ 6.6126 1.7776 1.6093 1.6511 1.6632 1.4560 1.5198 1.5566 1.5972
    6.6948 0.0482 0.0434 0.0431 0.0432 0.0508 0.0432 0.0434 0.0433
    β 2.5055 1.3135 1.3900 1.3308 1.3922 1.3444 1.3639 1.3737 1.3837
    2.5544 0.0460 0.0455 0.0411 0.0488 0.0469 0.0465 0.0468 0.0409
    θ 2.4578 2.7547 2.8720 2.8539 2.7734 2.4850 2.5687 2.6144 2.6644
    1.6470 0.0268 0.0218 0.0271 0.0263 0.0272 0.0247 0.0242 0.0246
    T=2.4 λ 2.9549 1.3291 1.3711 1.4598 1.4453 1.4568 1.4973 1.5183 1.5396
    2.8270 0.0364 0.0367 0.0337 0.0361 0.0367 0.0327 0.0327 0.0326
    β 1.7964 1.3560 1.3494 1.3707 1.3223 1.3630 1.3763 1.3830 1.3898
    1.7344 0.0340 0.0360 0.0345 0.0370 0.0328 0.0456 0.0337 0.0316
    θ 1.9669 2.7632 2.7933 2.8013 2.7365 2.3428 2.4169 2.4573 2.4995
    1.4303 0.0142 0.0170 0.0169 0.0146 0.0197 0.0181 0.0188 0.0123
    The first row represents the point estimation and the second row is the estimated variance.
    LE(a) = LINEX loss function with ρ = a

    Table 8.  90% and 95% C.I and the interval widths of the unknown parameters using the real data set.
    Censoring MLE Tierney-Kadane MCMC
    95% 90% 95% 90% 95% 90%
    x25:100 λ 0.052 13.174 1.089 12.136 0.981 2.000 1.102 2.004 0.968 2.020 0.987 2.018
    13.122 11.046 1.019 0.902 1.052 1.031
    β 0.002 5.009 0.398 4.613 1.006 1.685 1.009 1.632 1.018 1.672 1.021 1.666
    5.007 4.215 0.639 0.622 0.654 0.645
    θ 0.844 4.072 1.099 3.817 1.693 3.200 1.884 3.173 1.833 3.183 1.900 3.129
    3.228 2.718 1.507 1.290 1.350 1.230
    T=2.4 λ 0.184 5.725 0.623 5.287 0.942 2.069 1.028 1.967 0.927 2.070 1.026 1.983
    5.541 4.665 1.127 0.938 1.144 0.957
    β 0.097 3.496 0.366 3.227 1.051 1.699 1.091 1.675 1.053 1.695 1.092 1.621
    3.399 2.862 0.648 0.584 0.642 0.529
    θ 0.565 3.369 0.787 3.147 1.713 3.148 1.793 3.057 1.742 3.131 1.790 3.057
    2.803 2.360 1.435 1.264 1.388 1.267
    The first row represents the interval estimation while the second row is the interval width.


    The underlined selections in Table 7 represent the best point estimates, i.e., those with minimum variance. Also, the underlined selections in Table 8 represent the best interval estimates, i.e., those with minimum interval width.

    In this paper, point and interval estimation of the parameters of the Dagum model under the C-U hybrid censoring scheme is discussed from both classical and Bayesian perspectives. The MLEs and asymptotic CIs of the parameters of interest are computed. Since the Bayesian estimates of the involved parameters cannot be obtained analytically, Tierney and Kadane's approach is employed to obtain approximate Bayes estimates. It is found that the Bayesian estimates based on the LINEX loss function perform better than the corresponding ML estimators. Similar improvements are observed for the Bayesian estimates evaluated under the other loss functions; moreover, depending on the value of the asymmetry parameter ρ, the ER under the LINEX loss function may be smaller than that of the MLEs. The point estimates for the real data set show that the Tierney-Kadane approximation and MCMC are comparable in terms of the estimated variances, as are the interval estimates in terms of interval width.
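For reference, the Tierney-Kadane approximation mentioned above takes the following generic form (a scalar-parameter sketch, not the paper's exact multi-parameter expressions): writing $n\,\ell(\theta)=\log \pi(\theta)+\log L(\theta\mid x)$ for the prior-plus-likelihood and $n\,\ell^{*}(\theta)=n\,\ell(\theta)+\log g(\theta)$,

```latex
\mathbb{E}\bigl[g(\theta)\mid x\bigr]
  \approx \sqrt{\frac{\sigma^{*2}}{\sigma^{2}}}\,
  \exp\Bigl\{\, n\bigl[\ell^{*}(\hat{\theta}^{*}) - \ell(\hat{\theta})\bigr] \Bigr\},
```

where $\hat{\theta}$ and $\hat{\theta}^{*}$ maximize $\ell$ and $\ell^{*}$, and $\sigma^{2}=[-n\,\ell''(\hat{\theta})]^{-1}$, $\sigma^{*2}=[-n\,\ell^{*\prime\prime}(\hat{\theta}^{*})]^{-1}$. Both maximizations are standard optimization problems, which is what makes the approach attractive when the posterior expectation itself is intractable.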

    The authors would like to thank the editor and referees for their helpful comments, which improved the presentation of the paper. The authors would also like to extend their sincere appreciation to the Deanship of Scientific Research, King Saud University for funding the Research Group (RG-1435-056).

    The authors have no conflict of interest.

    [1] M. Acheritogaray, P. Degond, A. Frouvelle, et al. Kinetic formulation and global existence for the Hall-magneto-hydrodynamics system, Kinet. Relat. Mod., 4 (2011), 901–918.
    [2] S. A. Balbus and C. Terquem, Linear analysis of the Hall effect in protostellar disks, The Astrophysical Journal, 552 (2001), 235–247.
    [3] J. Bergh and J. Löfström, Interpolation Spaces. An Introduction, Springer-Verlag, New York, 1976.
    [4] D. Chae, P. Degond and J. G. Liu, Well-posedness for Hall-magnetohydrodynamics, Ann. I. H. Poincaré-An, 31 (2014), 555–565.
    [5] D. Chae and J. Lee, On the blow-up criterion and small data global existence for the Hall-magnetohydrodynamics, J. Differ. Equations, 256 (2014), 3835–3858.
    [6] D. Chae and M. Schonbek, On the temporal decay for the Hall-magnetohydrodynamic equations, J. Differ. Equations, 255 (2013), 3971–3982.
    [7] D. Chae and S. Weng, Singularity formation for the incompressible Hall-MHD equations without resistivity, Ann. I. H. Poincaré-An, 33 (2016), 1009–1022.
    [8] D. Chae and J. Wolf, On partial regularity for the steady Hall-magnetohydrodynamics system, Commun. Math. Phys., 339 (2015), 1147–1166.
    [9] D. Chae and J. Wolf, On partial regularity for the 3D nonstationary Hall magnetohydrodynamics equations on the plane, SIAM J. Math. Anal., 48 (2016), 443–469.
    [10] J. Fan, Y. Fukumoto, G. Nakamura, et al. Regularity criteria for the incompressible Hall-MHD system, ZAMM-Z. Angew. Math. Me., 95 (2015), 1156–1160.
    [11] J. Fan, X. Jia, G. Nakamura, et al. On well-posedness and blowup criteria for the magnetohydrodynamics with the Hall and ion-slip effects, Z. Angew. Math. Phys., 66 (2015), 1695–1706.
    [13] J. Fan, A. Alsaedi, T. Hayat, et al. On strong solutions to the compressible Hall-magnetohydrodynamic system, Nonlinear Anal-Real, 22 (2015), 423–434.
    [14] J. Fan, H. Malaikah, S. Monaquel, et al. Global Cauchy problem of 2D generalized MHD equations, Monatsh. Math., 175 (2014), 127–131.
    [15] J. Fan and T. Ozawa, Regularity criteria for the incompressible MHD with the Hall or Ion-Slip effects, International Journal of Mathematical Analysis, 9 (2015), 1173–1186.
    [16] J. Fan, F. Li and G. Nakamura, Regularity criteria for the incompressible Hall-magnetohydrodynamic equations, Nonlinear Anal-Theor, 109 (2014), 173–179.
    [17] J. Fan, B. Samet and Y. Zhou, A regularity criterion for a generalized Hall-MHD system, Comput. Math. Appl., 74 (2017), 2438–2443.
    [18] J. Fan, B. Ahmad, T. Hayat, et al. On well-posedness and blow-up for the full compressible Hall-MHD system, Nonlinear Anal-Real, 31 (2016), 569–579.
    [19] M. Fei and Z. Xiang, On the blow-up criterion and small data global existence for the Hall-magnetohydrodynamics with horizontal dissipation, J. Math. Phys., 56 (2015), 051504.
    [20] T. G. Forbes, Magnetic reconnection in solar flares, Geophys. Astro. Fluid, 62 (1991), 15–36.
    [21] S. Gala, Regularity criterion for the 3D magneto-micropolar fluid equations in the Morrey-Campanato space, NoDEA-Nonlinear Differ., 17 (2010), 181–194.
    [22] S. Gala, On the regularity criteria for the three-dimensional micropolar fluid equations in the critical Morrey-Campanato space, Nonlinear Anal-Real, 12 (2011), 2142–2150.
    [23] S. Gala, A new regularity criterion for the 3D MHD equations in R3, Commun. Pur. Appl. Anal., 11 (2012), 973–980.
    [24] J. Geng, X. Chen and S. Gala, On regularity criteria for the 3D magneto-micropolar fluid equations in the critical Morrey-Campanato space, Commun. Pur. Appl. Anal., 10 (2011), 583–592.
    [25] Z. Guo and S. Gala, Remarks on logarithmical regularity criteria for the Navier-Stokes equations, J. Math. Phys., 52 (2011), 063503.
    [26] F. He, B. Ahmad, T. Hayat, et al. On regularity criteria for the 3D Hall-MHD equations in terms of the velocity, Nonlinear Anal-Real, 32 (2016), 35–51.
    [27] X. Jia and Y. Zhou, Ladyzhenskaya-Prodi-Serrin type regularity criteria for the 3D incompressible MHD equations in terms of 3 × 3 mixture matrices, Nonlinearity, 28 (2015), 3289–3307.
    [28] Z. Jiang, Y. Wang and Y. Zhou, On regularity criteria for the 2D generalized MHD system, J. Math. Fluid Mech., 18 (2016), 331–341.
    [29] P. G. Lemarié-Rieusset, The Navier-Stokes equations in the critical Morrey-Campanato space, Rev. Mat. Iberoam., 23 (2007), 897–930.
    [30] M. J. Lighthill, Studies on magneto-hydrodynamic waves and other anisotropic wave motions, Philos. T. R. Soc. A, 252 (1960), 397–430.
    [31] S. Machihara and T. Ozawa, Interpolation inequalities in Besov spaces, Proc. Amer. Math. Soc., 131 (2003), 1553–1556.
    [32] Y. Meyer, P. Gerard and F. Oru, Inégalités de Sobolev précisées, Séminaire sur les équations aux Dérivées Partielles (Polytechnique), 1996 (1997), 1–8.
    [33] D. A. Shalybkov and V. A. Urpin, The Hall effect and the decay of magnetic fields, Astronomy and Astrophysics, 321 (1997), 685–690.
    [34] R. Wan and Y. Zhou, On global existence, energy decay and blow-up criteria for the Hall-MHD system, J. Differ. Equations, 259 (2015), 5982–6008.
    [35] Y. Wang and W. Zuo, On the blow-up criterion of smooth solutions for Hall-magnetohydrodynamics system with partial viscosity, Commun. Pur. Appl. Anal., 13 (2014), 1327–1336.
    [36] Z. Ye, Regularity criterion for the 3D Hall-magnetohydrodynamic equations involving the vorticity, Nonlinear Anal-Theor, 144 (2016), 182–193.
    [37] Y. Zhou and S. Gala, A new regularity criterion for weak solutions to the viscous MHD equations in terms of the vorticity field, Nonlinear Anal-Theor, 72 (2010), 3643–3648.
    [38] Y. Zhou and S. Gala, Regularity criteria in terms of the pressure for the Navier-Stokes equations in the critical Morrey-Campanato space, Z. Anal. Anwend., 30 (2011), 83–93.
    [39] R. Wan and Y. Zhou, Low regularity well-posedness for the 3D generalized Hall-MHD system, Acta Appl. Math., 147 (2017), 95–111.
  • This article has been cited by:

    1. N. Balakrishnan, Erhard Cramer, Debasis Kundu, 2023, 9780123983879, 189, 10.1016/B978-0-12-398387-9.00014-3
    2. Khalaf S. Sultan, Walid Emam, The Combined-Unified Hybrid Censored Samples from Pareto Distribution: Estimation and Properties, 2021, 11, 2076-3417, 6000, 10.3390/app11136000
    3. Heba S. Mohammed, Mazen Nassar, Refah Alotaibi, Ahmed Elshahhat, Analysis of Adaptive Progressive Type-II Hybrid Censored Dagum Data with Applications, 2022, 14, 2073-8994, 2146, 10.3390/sym14102146
    4. Walid Emam, Yusra Tashkandy, Emilio Gómez-Déniz, The Weibull Claim Model: Bivariate Extension, Bayesian, and Maximum Likelihood Estimations, 2022, 2022, 1563-5147, 1, 10.1155/2022/8729529
    5. 2023, 9780123983879, 361, 10.1016/B978-0-12-398387-9.00023-4
    6. Walid Emam, Ghadah Alomani, Predictive modeling of reliability engineering data using a new version of the flexible Weibull model, 2023, 20, 1551-0018, 9948, 10.3934/mbe.2023436
    7. Mustafa M. Hasaballah, Yusra A. Tashkandy, Oluwafemi Samson Balogun, M. E. Bakr, Statistical inference of unified hybrid censoring scheme for generalized inverted exponential distribution with application to COVID-19 data, 2024, 14, 2158-3226, 10.1063/5.0201467
    8. Mustafa M Hasaballah, Oluwafemi Samson Balogun, M E Bakr, Bayesian estimation for the power rayleigh lifetime model with application under a unified hybrid censoring scheme, 2024, 99, 0031-8949, 105209, 10.1088/1402-4896/ad72b2
  • © 2018 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)