Research article

Solvability and algorithm for Sylvester-type quaternion matrix equations with potential applications

  • Received: 07 January 2024 Revised: 08 May 2024 Accepted: 21 May 2024 Published: 19 June 2024
  • MSC : 15A09, 15A24, 15A29, 65F05

  • This article explores Sylvester quaternion matrix equations and potential applications, which are important in fields such as control theory, graphics, sensitivity analysis, and three-dimensional rotations. Recognizing that the determination of solutions and computational methods for these equations is evolving, our study contributes to the area by establishing solvability conditions and providing explicit solution formulations using generalized inverses. We also introduce an algorithm that utilizes representations of quaternion Moore-Penrose inverses to improve computational efficiency. This algorithm is validated with a numerical example, demonstrating its practical utility. Additionally, our findings offer a generalized framework in which various existing results in the area can be viewed as specific instances, showing the breadth and applicability of our approach. Acknowledging the challenges in handling large systems, we propose future research focused on further improving algorithmic efficiency and expanding the applications to diverse algebraic structures. Overall, our research establishes the theoretical foundations necessary for solving Sylvester-type quaternion matrix equations and introduces a novel algorithmic solution to address their computational challenges, enhancing both the theoretical understanding and practical implementation of these complex equations.

    Citation: Abdur Rehman, Ivan Kyrchei, Muhammad Zia Ur Rahman, Víctor Leiva, Cecilia Castro. Solvability and algorithm for Sylvester-type quaternion matrix equations with potential applications[J]. AIMS Mathematics, 2024, 9(8): 19967-19996. doi: 10.3934/math.2024974




    The inference of the stress-strength parameter R = P(Y < X) is an interesting topic in reliability analysis. The stress Y and the strength X are considered to be random variables; for example, X may represent a barrier's fire resistance and Y the severity of the fire to which the barrier is exposed. In the simplest stress-strength paradigm, a unit works if its strength exceeds the stress applied to it. Several authors have studied the reliability estimation of a single-component stress-strength model using various lifetime distributions for the stress and strength random variables, and the estimation of R has been investigated under various distributional assumptions. There is a large literature on estimating R for various stress-strength distributions under various situations. For examples based on complete data, see Ghitany et al. [1], Chen and Cheng [2], Rezaei et al. [3] and Sharma [4]. The estimation of R under various censoring schemes was established by Genç [5], Krishna et al. [6] and Babayi and Khorram [7]. Inference for R based on upper or lower record data was introduced by Nadar and Kızılaslan [8], Tripathi et al. [9] and Asgharzadeh et al. [10]. Under ranked set sampling data, Akgül and Şenoğlu [11], Akgül et al. [12,13] and Safariyan et al. [14] investigated the R estimator. Al-Babtain et al. [15] introduced R estimation of the stress-strength model for the power-modified Lindley distribution. Sabry et al. [16] obtained the stress-strength model and reliability estimation for an extension of the exponential distribution. Yousef and Almetwally [17] obtained multi stress-strength R based on progressive first failure for the Kumaraswamy model. Rezaei et al. [18] discussed estimation of P[Y < X] for the generalized Pareto distribution. Kundu and Gupta [19] estimated P[Y < X] for Weibull distributions.
Jose [20] discussed estimation of stress-strength reliability using a discrete phase-type distribution. Almetwally et al. [21] discussed an optimal plan of multi-stress-strength reliability with Bayesian and non-Bayesian methods for the alpha power exponential model using progressive first failure. Kotz et al. [22] compiled a significant body of literature on the topic up to the year 2003, which can be consulted for more information. All of the previous studies considered a single-component model, while in this paper we consider a multi-component system.

    A multi-component system refers to a system with more than one component. Such a system, made up of k independent and identical strength components, operates if s (1 ≤ s ≤ k) or more of the components operate simultaneously. The system is subjected to stress Y in its operating environment. The component strengths, that is, the minimum stresses required to cause failure, are independent and identically distributed random variables. This model corresponds to the s-out-of-k: G system, which is used in a variety of industrial and military systems.

    In a multi-component system with k components, the components have independent and identically distributed (iid) random strengths X1, X2, ..., Xk, and each component is subjected to a common random stress Y. The system survives if and only if at least s out of k (1 ≤ s ≤ k) strengths exceed the stress. Let Y, X1, X2, ..., Xk be independent random variables, with G(y) being the continuous cumulative distribution function (cdf) of Y and F(x) being the common continuous cdf of X1, X2, ..., Xk. Bhattacharyya and Johnson [23] introduced the reliability in a multi-component stress-strength (MSS) model, which is given by

    $R_{s,k}=P\left[\text{at least } s \text{ of } (X_1,X_2,\dots,X_k) \text{ exceed } Y\right]=\sum_{i=s}^{k}\binom{k}{i}\int_{-\infty}^{\infty}\left[1-F(y)\right]^{i}\left[F(y)\right]^{k-i}\,dG(y).$ (1.1)
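    The sum-integral in Eq (1.1) can be evaluated numerically for any pair of continuous stress and strength distributions. The following sketch is illustrative only; the exponential distributions, the midpoint rule, and the function names are our own choices used to check the formula against a case with a known closed form, not part of the paper:

```python
import math

def R_sk_numeric(s, k, F, g, lo=0.0, hi=50.0, n=20000):
    """Evaluate Eq (1.1) by the midpoint rule:
    R_{s,k} = sum_{i=s}^{k} C(k,i) * integral of [1-F(y)]^i F(y)^(k-i) g(y) dy."""
    h = (hi - lo) / n
    total = 0.0
    for m in range(n):
        y = lo + (m + 0.5) * h
        Fy = F(y)
        total += h * g(y) * sum(math.comb(k, i) * (1 - Fy) ** i * Fy ** (k - i)
                                for i in range(s, k + 1))
    return total

# Illustrative check: strengths ~ Exp(1), stress ~ Exp(2); for (s, k) = (2, 5)
# the integral works out to 6/7 in closed form.
F_exp = lambda x: 1 - math.exp(-x)
g_exp = lambda y: 2 * math.exp(-2 * y)
r = R_sk_numeric(2, 5, F_exp, g_exp)
```

    A truncation point `hi` well into the stress distribution's tail keeps the discretization error negligible for smooth integrands like these.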

    Although complete sample cases have been used to derive statistical inference on multi-component stress-strength models in the reliability literature, this subject has received little attention under censored data, particularly progressive Type-II censored samples. The following sources are closely relevant to the structure of our research.

    Kohansal [24] recently discussed estimating reliability in a multi-component stress-strength model where data are observed under progressive Type-II censoring for the Kumaraswamy distribution. For the general class of inverse exponentiated distributions, Kizilaslan [25] studied classical and Bayesian estimation of reliability in a multi-component stress-strength model based on complete data. Gunasekera [26] evaluated the reliability of a multi-component system using progressively Type-II censored samples with uniformly random removals, obtaining various interval inferences when the common parameter is known, including a uniformly minimum-variance unbiased estimator and, in the Bayesian setting, estimates based on the Tierney-Kadane approximation technique. It should be noted that Gunasekera [26] did not consider the case of an unknown common parameter.

    Progressive Type-II censoring is commonly used in life-testing trials to analyze data under time and cost restrictions. Type-I and Type-II are the two most common censoring techniques. Under Type-I censoring, a test is terminated at a pre-specified time, whereas under Type-II censoring, the test is terminated after a pre-specified number of failures has been recorded. Neither scheme allows live units to be removed during the test. The progressive Type-II censoring scheme is described as follows. The experimenter first places N independent and identical units on the life test. When the first failure occurs, say at t(1), r1 units are eliminated at random from the remaining N − 1 surviving units. When the second failure occurs at t(2), r2 units are eliminated at random from the remaining N − r1 − 2 surviving units. When the nth failure occurs at time t(n), the experiment ends, and the remaining $r_n = N - n - \sum_{i=1}^{n-1} r_i$ surviving units are eliminated from the test. For various uses of this censoring in lifespan analysis, see Balakrishnan and Aggarwala [27] and Balakrishnan and Cramer [28]. For some useful implications of this censoring scheme, see Raqab and Madi [29], Wu et al. [30] and Rastogi and Tripathi [31].
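    The removal mechanism just described is easy to simulate. A minimal illustrative sketch (the exponential lifetimes and the function name are our own assumptions for demonstration, not the paper's code):

```python
import random

def progressive_type2_sample(lifetimes, removals):
    """Sketch of progressive Type-II censoring: at the i-th observed failure,
    remove removals[i] surviving units at random. Requires
    len(lifetimes) == len(removals) + sum(removals)."""
    alive = sorted(lifetimes)
    observed = []
    for r in removals:
        observed.append(alive.pop(0))      # next failure among survivors
        for _ in range(r):                 # random withdrawals after the failure
            alive.pop(random.randrange(len(alive)))
    return observed

random.seed(1)
removals = [2, 1, 0, 3, 1]                 # n = 5 failures, N = 5 + 7 = 12 units
units = [random.expovariate(1.0) for _ in range(12)]
sample = progressive_type2_sample(units, removals)
```

    The observed values come out in increasing order by construction, since each failure is the minimum of the surviving units.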

    In this study, we attempt to estimate Rs,k when the underlying distribution is the power Lomax (POLO) distribution. According to Rady et al. [32], the POLO distribution is obtained by applying the power transformation $X = Y^{1/\beta}$ to a Lomax random variable Y; the probability density function (pdf) of the resulting POLO distribution is defined by

    $f(x)=\frac{\alpha\beta}{\lambda}\,x^{\beta-1}\left(1+\frac{x^{\beta}}{\lambda}\right)^{-\alpha-1},\quad x>0.$ (1.2)

    The corresponding cumulative distribution function (CDF) and survival function of POLO distribution are given by

    $F(x)=1-\left(1+\frac{x^{\beta}}{\lambda}\right)^{-\alpha}$ (1.3)

    and

    $S(x)=\left(1+\frac{x^{\beta}}{\lambda}\right)^{-\alpha},$ (1.4)

    where α and β are shape parameters and λ is a scale parameter.
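    A small sketch of the POLO pdf, cdf, and quantile function of Eqs (1.2) and (1.3); the quantile function follows from solving F(x) = u, and feeding it a Uniform(0,1) draw gives an inverse-transform sampler. Function names are our own:

```python
def polo_pdf(x, alpha, beta, lam):
    # Eq (1.2): f(x) = (alpha*beta/lam) * x^(beta-1) * (1 + x^beta/lam)^(-alpha-1)
    return alpha * beta / lam * x ** (beta - 1) * (1 + x ** beta / lam) ** (-alpha - 1)

def polo_cdf(x, alpha, beta, lam):
    # Eq (1.3): F(x) = 1 - (1 + x^beta/lam)^(-alpha)
    return 1 - (1 + x ** beta / lam) ** (-alpha)

def polo_ppf(u, alpha, beta, lam):
    # Invert Eq (1.3): x = [lam * ((1-u)^(-1/alpha) - 1)]^(1/beta), 0 < u < 1.
    return (lam * ((1 - u) ** (-1 / alpha) - 1)) ** (1 / beta)
```

    For instance, with α = 2, β = 1.5, λ = 1 one gets F(1) = 1 − 2⁻² = 0.75 and f(1) = 3·2⁻³ = 0.375, which the cdf/pdf pair reproduces.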

    Due to physical constraints in these sectors, such as limited power supply, maintenance resources, and/or system design life, we suggest the POLO distribution for modeling reliability and life-testing data sets. The survival function gives the probability that organisms, technical units, or other systems survive beyond a certain point in time. The hazard rate function (HRF) is a significant criterion for characterizing lifetime distributions, since it measures the instantaneous probability of failing or dying given the age attained.

    When both stress and strength follow the POLO distribution, the major goal of this study is to estimate Rs,k using both classical and Bayesian techniques. Section 2 studies the estimation of Rs,k by maximum likelihood under progressive Type-II censoring and constructs an asymptotic confidence interval when all the parameters are unknown. In Section 3, the maximum product of spacings (MPS) methodology is used to obtain the estimator of Rs,k. Section 4 includes the construction of boot-p and boot-t confidence intervals. In Section 5, the Bayes estimates are determined under a squared error loss function (SELF) and a linear-exponential (LINEX) loss function using gamma informative priors; the Markov chain Monte Carlo (MCMC) method is used for the Bayesian computation, and the Bayesian credible and HPD credible intervals are constructed. A simulation study and a real data set are analyzed in Sections 6 and 7, respectively. Finally, the conclusion is presented in Section 8.

    Let X1, X2, ..., Xk and Y be iid random samples taken from the POLO(α1, β, λ) and POLO(α2, β, λ) distributions, respectively, with a common shape parameter β and scale parameter λ. Under this setup, the reliability Rs,k is as follows:

    $R_{s,k}=\sum_{i=s}^{k}\binom{k}{i}\int_{0}^{\infty}\left[\left(1+\frac{y^{\beta}}{\lambda}\right)^{-\alpha_1}\right]^{i}\left[1-\left(1+\frac{y^{\beta}}{\lambda}\right)^{-\alpha_1}\right]^{k-i}\frac{\alpha_2\beta}{\lambda}\,y^{\beta-1}\left(1+\frac{y^{\beta}}{\lambda}\right)^{-\alpha_2-1}dy.$ (2.1)

    Setting $u=\left(1+\frac{y^{\beta}}{\lambda}\right)^{-\alpha_2}$ in Eq (2.1), we obtain

    $R_{s,k}=\sum_{i=s}^{k}\binom{k}{i}\int_{0}^{1}u^{i\alpha_1/\alpha_2}\left[1-u^{\alpha_1/\alpha_2}\right]^{k-i}du=\sum_{i=s}^{k}\binom{k}{i}\frac{\alpha_2}{\alpha_1}B\!\left(\frac{\alpha_2}{\alpha_1}+i,\;k-i+1\right),$ (2.2)

    where B(·,·) is the standard Beta function and k and i are integers.
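    The closed form in Eq (2.2) is cheap to compute via the gamma-function representation of the Beta function. A sketch (function names are illustrative); note that when α1 = α2 the sum reduces to (k − s + 1)/(k + 1), which gives a convenient sanity check:

```python
import math

def beta_fn(a, b):
    """Beta function via gammas: B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def R_sk_closed(s, k, a1, a2):
    """Eq (2.2): R_{s,k} = sum_{i=s}^{k} C(k,i) (a2/a1) B(a2/a1 + i, k-i+1)."""
    t = a2 / a1
    return sum(math.comb(k, i) * t * beta_fn(t + i, k - i + 1)
               for i in range(s, k + 1))
```

    For example, R_sk_closed(2, 5, a, a) equals 4/6 for any common shape a, and larger α2/α1 (weaker strength relative to stress under this parameterization) moves the reliability accordingly.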

    To obtain the desired MLE of Rs,k, the MLEs of α1, α2, β and λ are evaluated under the progressive Type-II censoring scheme. Suppose N systems are employed in a life-testing experiment, and progressive Type-II censored samples $\{X_{i1},X_{i2},\dots,X_{ik}\}$, i = 1, 2, ..., n, are generated from the POLO(α1, β, λ) distribution under the progressive censoring scheme $\{K,k,r_1,\dots,r_k\}$. Consider also a progressively censored sample $\{Y_1,Y_2,\dots,Y_n\}$ obtained from the POLO(α2, β, λ) distribution using the censoring scheme $\{N,n,S_1,\dots,S_n\}$.

    Then the likelihood function of α1, α2, β and λ is obtained as

    $L(\alpha_1,\alpha_2,\beta,\lambda)=c_1\prod_{i=1}^{n}\left[c_2\prod_{j=1}^{k}f(x_{ij})\left[1-F(x_{ij})\right]^{r_j}\right]f(y_i)\left[1-F(y_i)\right]^{S_i},$ (2.3)

    where the constants c1 and c2 are given by

    $c_1=N(N-S_1-1)\cdots(N-S_1-\cdots-S_{n-1}-n+1),\quad c_2=K(K-r_1-1)\cdots(K-r_1-\cdots-r_{k-1}-k+1).$

    As a result, the likelihood function is expressed as:

    $L(\text{data}\mid\alpha_1,\alpha_2,\beta,\lambda)=c_1c_2^{\,n}\prod_{i=1}^{n}\left[\prod_{j=1}^{k}\frac{\alpha_1\beta}{\lambda}\,x_{ij}^{\beta-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-\alpha_1-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-r_j\alpha_1}\right]\prod_{i=1}^{n}\frac{\alpha_2\beta}{\lambda}\,y_{i}^{\beta-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-\alpha_2-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-S_i\alpha_2}.$ (2.4)

    The log-likelihood function is as follows:

    $\ell(\alpha_1,\alpha_2,\beta,\lambda\mid\text{data})=nk\ln\alpha_1+n\ln\alpha_2+n(k+1)\ln\beta-n(k+1)\ln\lambda+(\beta-1)\sum_{i=1}^{n}\sum_{j=1}^{k}\ln x_{ij}+(\beta-1)\sum_{i=1}^{n}\ln y_i-\sum_{i=1}^{n}\sum_{j=1}^{k}\left[(r_j+1)\alpha_1+1\right]\ln\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)-\sum_{i=1}^{n}\left[(S_i+1)\alpha_2+1\right]\ln\left(1+\frac{y_{i}^{\beta}}{\lambda}\right).$ (2.5)

    The likelihood equations are constructed by differentiating Eq (2.5) with respect to each parameter of interest, in the following forms:

    $\frac{\partial\ell}{\partial\alpha_1}=\frac{nk}{\alpha_1}-\sum_{i=1}^{n}\sum_{j=1}^{k}(r_j+1)\ln\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)=0,$ (2.6)
    $\frac{\partial\ell}{\partial\alpha_2}=\frac{n}{\alpha_2}-\sum_{i=1}^{n}(S_i+1)\ln\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)=0,$ (2.7)
    $\frac{\partial\ell}{\partial\beta}=\frac{n(k+1)}{\beta}+\sum_{i=1}^{n}\sum_{j=1}^{k}\ln x_{ij}+\sum_{i=1}^{n}\ln y_i-\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{\left[(r_j+1)\alpha_1+1\right]x_{ij}^{\beta}\ln x_{ij}}{\lambda\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}-\sum_{i=1}^{n}\frac{\left[(S_i+1)\alpha_2+1\right]y_{i}^{\beta}\ln y_i}{\lambda\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}=0$ (2.8)

    and

    $\frac{\partial\ell}{\partial\lambda}=-\frac{n(k+1)}{\lambda}+\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{\left[(r_j+1)\alpha_1+1\right]x_{ij}^{\beta}}{\lambda^{2}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}+\sum_{i=1}^{n}\frac{\left[(S_i+1)\alpha_2+1\right]y_{i}^{\beta}}{\lambda^{2}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}=0.$ (2.9)

    The MLEs of the parameters α1 and α2 are derived from the solutions of Eqs (2.6) and (2.7), respectively:

    $\hat{\alpha}_1=\frac{nk}{\sum_{i=1}^{n}\sum_{j=1}^{k}(r_j+1)\ln\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}$ (2.10)

    and

    $\hat{\alpha}_2=\frac{n}{\sum_{i=1}^{n}(S_i+1)\ln\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}.$ (2.11)

    By incorporating $\hat{\alpha}_1$ and $\hat{\alpha}_2$ into Eq (2.2), the MLE of Rs,k becomes

    $\hat{R}_{s,k}=\sum_{i=s}^{k}\binom{k}{i}\frac{\hat{\alpha}_2}{\hat{\alpha}_1}B\!\left(\frac{\hat{\alpha}_2}{\hat{\alpha}_1}+i,\;k-i+1\right).$ (2.12)
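    Given β and λ, Eqs (2.10) and (2.11) are explicit, so the plug-in step is a pair of sums. A hypothetical sketch (the data layout and function name are our own assumptions):

```python
import math

def alpha_hats(x, y, r, S, beta, lam):
    """Plug-in MLEs from Eqs (2.10) and (2.11) for fixed beta and lam.
    x: n rows of k strength observations; y: n stress observations;
    r, S: progressive-censoring removal counts (lengths k and n)."""
    n, k = len(x), len(x[0])
    d1 = sum((r[j] + 1) * math.log(1 + x[i][j] ** beta / lam)
             for i in range(n) for j in range(k))
    d2 = sum((S[i] + 1) * math.log(1 + y[i] ** beta / lam) for i in range(n))
    return n * k / d1, n / d2
```

    The resulting pair would then be substituted into Eq (2.12); in practice β and λ themselves come from numerically solving Eqs (2.8) and (2.9).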

    We use the asymptotic distribution of the MLE $\hat{R}_{s,k}$ to construct an asymptotic confidence interval for the multicomponent reliability Rs,k; for this we also need the asymptotic distribution of $\hat{\theta}=(\hat{\alpha}_1,\hat{\alpha}_2,\hat{\beta},\hat{\lambda})$. In this regard, let E[I(θ)] denote the expected Fisher information matrix, where

    $I(\theta)=\left[I_{ij}\right]=\left[-\frac{\partial^{2}\ell}{\partial\theta_i\,\partial\theta_j}\right],\quad i,j=1,2,3,4.$

    The elements of this matrix are obtained as

    $I_{11}=\frac{nk}{\alpha_1^{2}},\quad I_{12}=I_{21}=0,\quad I_{13}=I_{31}=\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{(r_j+1)x_{ij}^{\beta}\ln x_{ij}}{\lambda\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)},\quad I_{14}=I_{41}=-\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{(r_j+1)x_{ij}^{\beta}}{\lambda^{2}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)},$
    $I_{22}=\frac{n}{\alpha_2^{2}},\quad I_{23}=I_{32}=\sum_{i=1}^{n}\frac{(S_i+1)y_{i}^{\beta}\ln y_i}{\lambda\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)},\quad I_{24}=I_{42}=-\sum_{i=1}^{n}\frac{(S_i+1)y_{i}^{\beta}}{\lambda^{2}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)},$
    $I_{33}=\frac{n(k+1)}{\beta^{2}}+\sum_{i=1}^{n}\sum_{j=1}^{k}\left[(r_j+1)\alpha_1+1\right]\left[\frac{x_{ij}^{\beta}(\ln x_{ij})^{2}}{\lambda\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}-\frac{\left(x_{ij}^{\beta}\ln x_{ij}\right)^{2}}{\lambda^{2}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{2}}\right]+\sum_{i=1}^{n}\left[(S_i+1)\alpha_2+1\right]\left[\frac{y_{i}^{\beta}(\ln y_i)^{2}}{\lambda\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}-\frac{\left(y_{i}^{\beta}\ln y_i\right)^{2}}{\lambda^{2}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{2}}\right],$
    $I_{44}=-\frac{n(k+1)}{\lambda^{2}}+\sum_{i=1}^{n}\sum_{j=1}^{k}\left[(r_j+1)\alpha_1+1\right]\left[\frac{2x_{ij}^{\beta}}{\lambda^{3}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}-\frac{x_{ij}^{2\beta}}{\lambda^{4}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{2}}\right]+\sum_{i=1}^{n}\left[(S_i+1)\alpha_2+1\right]\left[\frac{2y_{i}^{\beta}}{\lambda^{3}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}-\frac{y_{i}^{2\beta}}{\lambda^{4}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{2}}\right]$

    and

    $I_{34}=I_{43}=\sum_{i=1}^{n}\sum_{j=1}^{k}\left[(r_j+1)\alpha_1+1\right]\left[\frac{x_{ij}^{2\beta}\ln x_{ij}}{\lambda^{3}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{2}}-\frac{x_{ij}^{\beta}\ln x_{ij}}{\lambda^{2}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)}\right]+\sum_{i=1}^{n}\left[(S_i+1)\alpha_2+1\right]\left[\frac{y_{i}^{2\beta}\ln y_i}{\lambda^{3}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{2}}-\frac{y_{i}^{\beta}\ln y_i}{\lambda^{2}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)}\right].$

    The asymptotic variances (AVs) of $\hat{\alpha}_1$ and $\hat{\alpha}_2$ are calculated from the Fisher information as given below:

    $V(\hat{\alpha}_1)=\left(E\left[-\frac{\partial^{2}\ell}{\partial\alpha_1^{2}}\right]\right)^{-1}=\frac{\alpha_1^{2}}{nk}$ (2.13)

    and

    $V(\hat{\alpha}_2)=\left(E\left[-\frac{\partial^{2}\ell}{\partial\alpha_2^{2}}\right]\right)^{-1}=\frac{\alpha_2^{2}}{n}.$ (2.14)

    The MLE of Rs,k is asymptotically normal with mean Rs,k and asymptotic variance given by:

    $V(\hat{R}_{s,k})=\sum_{j=1}^{4}\sum_{i=1}^{4}\frac{\partial R_{s,k}}{\partial\theta_i}\frac{\partial R_{s,k}}{\partial\theta_j}I^{-1}_{ij}=\left(\frac{\partial R_{s,k}}{\partial\alpha_1}\right)^{2}I^{-1}_{11}+2\,\frac{\partial R_{s,k}}{\partial\alpha_1}\frac{\partial R_{s,k}}{\partial\alpha_2}I^{-1}_{12}+\left(\frac{\partial R_{s,k}}{\partial\alpha_2}\right)^{2}I^{-1}_{22},$ (2.15)

    for more details, one may refer to Rao [33]. It should be noticed that we obtain Rs,k and its derivatives for (s, k) = (1, 3) and (2, 4) independently, to avoid the difficulty of deriving Rs,k in general.

    Therefore, the 100(1 − γ)% confidence interval of Rs,k is constructed as given below:

    $\hat{R}_{s,k}\pm z_{\gamma/2}\sqrt{\hat{V}(\hat{R}_{s,k})},$

    where $z_{\gamma/2}$ denotes the upper γ/2 quantile of the standard normal distribution and $\hat{V}(\hat{R}_{s,k})$ is the MLE of $V(\hat{R}_{s,k})$, obtained by replacing (α1, α2, β, λ) in $V(\hat{R}_{s,k})$ by their corresponding MLEs.
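    The interval above can be sketched end to end with a numerical delta method, using the paper's variances from Eqs (2.13) and (2.14) and, as a simplifying assumption of ours, taking the cross term of Eq (2.15) as zero (consistent with I_12 = 0). Function names and the finite-difference step are our own:

```python
import math

def R_sk(a1, a2, s=2, k=5):
    """Closed-form R_{s,k} of Eq (2.2), restated to keep the sketch self-contained."""
    B = lambda a, b: math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    t = a2 / a1
    return sum(math.comb(k, i) * t * B(t + i, k - i + 1) for i in range(s, k + 1))

def delta_ci(a1, a2, n, k=5, s=2, z=1.96, h=1e-5):
    """Asymptotic CI sketch: V(a1) = a1^2/(nk), V(a2) = a2^2/n, with
    central-difference partial derivatives of R_{s,k}."""
    dR1 = (R_sk(a1 + h, a2, s, k) - R_sk(a1 - h, a2, s, k)) / (2 * h)
    dR2 = (R_sk(a1, a2 + h, s, k) - R_sk(a1, a2 - h, s, k)) / (2 * h)
    var = dR1 ** 2 * a1 ** 2 / (n * k) + dR2 ** 2 * a2 ** 2 / n
    half = z * math.sqrt(var)
    R = R_sk(a1, a2, s, k)
    return max(0.0, R - half), min(1.0, R + half)

lo, hi = delta_ci(1.3, 2.0, n=20)
```

    Clipping to [0, 1] is a pragmatic choice, since the normal approximation can spill outside the parameter space for small n.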

    Ng et al. [34] presented the MPS approach. The MPS technique determines the parameter values that make the observed data as uniform as possible with respect to a given quantitative measure of uniformity; based on a progressively Type-II censored sample,

    $S=\prod_{i=1}^{n+1}\left(F(t_{i};\theta)-F(t_{i-1};\theta)\right)\prod_{i=1}^{n}\left(1-F(t_{i};\theta)\right)^{R_i}.$ (3.1)

    Cheng and Amin [35] defined the geometric mean of the spacings as

    $G=\left(\prod_{i=1}^{n+1}D_i\right)^{\frac{1}{n+1}},$

    where

    $D_i=\begin{cases}F(t_1), & i=1,\\ F(t_i)-F(t_{i-1}), & i=2,\dots,n,\\ 1-F(t_n), & i=n+1,\end{cases}$

    such that $\sum_{i=1}^{n+1}D_i=1$, following the MPS method introduced by Cheng and Amin [36] and the progressive Type-II censoring scheme discussed by Balakrishnan and Aggarwala [27] and Ng et al. [37]. For more applications of MPS to complete samples, see Abu El Azm et al. [38], Sabry et al. [39] and Singh et al. [40].

    Then the MPS function of α1, α2, β and λ is obtained as

    $L_{MPS}=C\left[F(x_{11})\left(1-F(x_{nk})\right)\right]\prod_{i=2}^{n}\prod_{j=2}^{k}\left(F(x_{ij})-F(x_{i-1,j-1})\right)\prod_{i=1}^{n}\prod_{j=1}^{k}\left(1-F(x_{ij})\right)^{r_j}\left[F(y_{1})\left(1-F(y_{n})\right)\right]\prod_{i=2}^{n}\left(F(y_{i})-F(y_{i-1})\right)\prod_{i=1}^{n}\left(1-F(y_{i})\right)^{S_i}$
    $=C\left[\left(1-(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}\right)(1+x_{nk}^{\beta}/\lambda)^{-\alpha_1}\right]\left[\left(1-(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}\right)(1+y_{n}^{\beta}/\lambda)^{-\alpha_2}\right]\prod_{i=2}^{n}\prod_{j=2}^{k}\left((1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}-(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}\right)\prod_{i=1}^{n}\prod_{j=1}^{k}(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1 r_j}\prod_{i=2}^{n}\left((1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}-(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}\right)\prod_{i=1}^{n}(1+y_{i}^{\beta}/\lambda)^{-\alpha_2 S_i}.$ (3.2)

    The natural logarithm of the MPS function is

    $\ln L_{MPS}=\ln\left(1-(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}\right)-\alpha_1\ln(1+x_{nk}^{\beta}/\lambda)+\sum_{i=2}^{n}\sum_{j=2}^{k}\ln\left[(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}-(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}\right]-\alpha_1\sum_{i=1}^{n}\sum_{j=1}^{k}r_j\ln(1+x_{ij}^{\beta}/\lambda)+\ln\left(1-(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}\right)-\alpha_2\ln(1+y_{n}^{\beta}/\lambda)+\sum_{i=2}^{n}\ln\left[(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}-(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}\right]-\alpha_2\sum_{i=1}^{n}S_i\ln(1+y_{i}^{\beta}/\lambda).$ (3.3)

    We partially differentiate Eq (3.3) with respect to the parameters α1, α2, β and λ, and then equate the derivatives to zero to obtain the normal equations for the unknown parameters. The estimators of α1, α2, β and λ can be found by solving the equations below.

    $\frac{\partial\ln L_{MPS}}{\partial\alpha_1}=\frac{(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}\ln(1+x_{11}^{\beta}/\lambda)}{1-(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}}-\ln(1+x_{nk}^{\beta}/\lambda)-\sum_{i=1}^{n}\sum_{j=1}^{k}r_j\ln(1+x_{ij}^{\beta}/\lambda)+\sum_{i=2}^{n}\sum_{j=2}^{k}\frac{(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}\ln(1+x_{ij}^{\beta}/\lambda)-(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}\ln(1+x_{i-1,j-1}^{\beta}/\lambda)}{(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}-(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}},$ (3.4)
    $\frac{\partial\ln L_{MPS}}{\partial\alpha_2}=\frac{(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}\ln(1+y_{1}^{\beta}/\lambda)}{1-(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}}-\ln(1+y_{n}^{\beta}/\lambda)-\sum_{i=1}^{n}S_i\ln(1+y_{i}^{\beta}/\lambda)+\sum_{i=2}^{n}\frac{(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}\ln(1+y_{i}^{\beta}/\lambda)-(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}\ln(1+y_{i-1}^{\beta}/\lambda)}{(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}-(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}},$ (3.5)
    $\frac{\partial\ln L_{MPS}}{\partial\beta}=\frac{\alpha_1 x_{11}^{\beta}\ln(x_{11})(1+x_{11}^{\beta}/\lambda)^{-\alpha_1-1}}{\lambda\left[1-(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}\right]}-\frac{\alpha_1 x_{nk}^{\beta}\ln(x_{nk})}{\lambda(1+x_{nk}^{\beta}/\lambda)}-\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{r_j\alpha_1 x_{ij}^{\beta}\ln(x_{ij})}{\lambda(1+x_{ij}^{\beta}/\lambda)}+\sum_{i=2}^{n}\sum_{j=2}^{k}\frac{\alpha_1\left[x_{ij}^{\beta}\ln(x_{ij})(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1-1}-x_{i-1,j-1}^{\beta}\ln(x_{i-1,j-1})(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1-1}\right]}{\lambda\left[(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}-(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}\right]}+\frac{\alpha_2 y_{1}^{\beta}\ln(y_1)(1+y_{1}^{\beta}/\lambda)^{-\alpha_2-1}}{\lambda\left[1-(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}\right]}-\frac{\alpha_2 y_{n}^{\beta}\ln(y_n)}{\lambda(1+y_{n}^{\beta}/\lambda)}-\sum_{i=1}^{n}\frac{S_i\alpha_2 y_{i}^{\beta}\ln(y_i)}{\lambda(1+y_{i}^{\beta}/\lambda)}+\sum_{i=2}^{n}\frac{\alpha_2\left[y_{i}^{\beta}\ln(y_i)(1+y_{i}^{\beta}/\lambda)^{-\alpha_2-1}-y_{i-1}^{\beta}\ln(y_{i-1})(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2-1}\right]}{\lambda\left[(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}-(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}\right]}$ (3.6)

    and

    $\frac{\partial\ln L_{MPS}}{\partial\lambda}=-\frac{\alpha_1 x_{11}^{\beta}(1+x_{11}^{\beta}/\lambda)^{-\alpha_1-1}}{\lambda^{2}\left[1-(1+x_{11}^{\beta}/\lambda)^{-\alpha_1}\right]}+\frac{\alpha_1 x_{nk}^{\beta}}{\lambda^{2}(1+x_{nk}^{\beta}/\lambda)}+\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{r_j\alpha_1 x_{ij}^{\beta}}{\lambda^{2}(1+x_{ij}^{\beta}/\lambda)}+\sum_{i=2}^{n}\sum_{j=2}^{k}\frac{\alpha_1\left[x_{i-1,j-1}^{\beta}(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1-1}-x_{ij}^{\beta}(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1-1}\right]}{\lambda^{2}\left[(1+x_{i-1,j-1}^{\beta}/\lambda)^{-\alpha_1}-(1+x_{ij}^{\beta}/\lambda)^{-\alpha_1}\right]}-\frac{\alpha_2 y_{1}^{\beta}(1+y_{1}^{\beta}/\lambda)^{-\alpha_2-1}}{\lambda^{2}\left[1-(1+y_{1}^{\beta}/\lambda)^{-\alpha_2}\right]}+\frac{\alpha_2 y_{n}^{\beta}}{\lambda^{2}(1+y_{n}^{\beta}/\lambda)}+\sum_{i=1}^{n}\frac{S_i\alpha_2 y_{i}^{\beta}}{\lambda^{2}(1+y_{i}^{\beta}/\lambda)}+\sum_{i=2}^{n}\frac{\alpha_2\left[y_{i-1}^{\beta}(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2-1}-y_{i}^{\beta}(1+y_{i}^{\beta}/\lambda)^{-\alpha_2-1}\right]}{\lambda^{2}\left[(1+y_{i-1}^{\beta}/\lambda)^{-\alpha_2}-(1+y_{i}^{\beta}/\lambda)^{-\alpha_2}\right]}.$ (3.7)

    The MPS estimates $\hat{\alpha}_{1(MPS)}$, $\hat{\alpha}_{2(MPS)}$, $\hat{\beta}_{(MPS)}$ and $\hat{\lambda}_{(MPS)}$ can be obtained by simultaneously solving the equations

    $\frac{\partial\ln L_{MPS}}{\partial\alpha_1}=0,\quad\frac{\partial\ln L_{MPS}}{\partial\alpha_2}=0,\quad\frac{\partial\ln L_{MPS}}{\partial\beta}=0\quad\text{and}\quad\frac{\partial\ln L_{MPS}}{\partial\lambda}=0.$

    Equations (3.4)–(3.7) must be solved numerically using a nonlinear optimization approach. By incorporating $\hat{\alpha}_{1(MPS)}$ and $\hat{\alpha}_{2(MPS)}$ into Eq (2.2), the MPS estimator of Rs,k becomes

    $\tilde{R}_{s,k(MPS)}=\sum_{i=s}^{k}\binom{k}{i}\frac{\hat{\alpha}_{2(MPS)}}{\hat{\alpha}_{1(MPS)}}B\!\left(\frac{\hat{\alpha}_{2(MPS)}}{\hat{\alpha}_{1(MPS)}}+i,\;k-i+1\right).$ (3.8)

    As with point estimation, a parametric bootstrap interval tells us a great deal about the population value of the quantity of interest. Furthermore, CIs based on asymptotic results can be inaccurate for small sample sizes. To determine the bootstrap CIs of α1, α2, β and λ, two parametric bootstrap methods are considered: the percentile bootstrap (Boot-p) CIs, introduced by Efron [41], and the bootstrap-t (Boot-t) CIs, presented by Hall [42]. The Boot-t method was developed using a studentized 'pivot' and requires a variance estimator for the MLEs of α1, α2, β and λ.

    Step 1: Generate a bootstrap sample of size nk, $\{x^{*}_{i1},x^{*}_{i2},\dots,x^{*}_{ik}\}$, from $\{x_{i1},x_{i2},\dots,x_{ik}\}$, i = 1, 2, ..., n, and generate a bootstrap sample of size n, $\{y^{*}_{1},y^{*}_{2},\dots,y^{*}_{n}\}$, from $\{y_{1},y_{2},\dots,y_{n}\}$. Compute the bootstrap estimate of Rs,k, say $\hat{R}^{*}_{s,k}$, using Eq (2.12).

    Step 2: Repeat Step 1, NBoot times.

    Step 3: Let $G_1(z)=P(\hat{R}^{*}_{s,k}\leq z)$ be the cumulative distribution function of $\hat{R}^{*}_{s,k}$. Define $\hat{R}_{s,k(boot\text{-}p)}(z)=G_1^{-1}(z)$ for given z. The approximate bootstrap-p 100(1 − γ)% CI of $\hat{R}_{s,k}$ is given by

    $\left[\hat{R}_{s,k(boot\text{-}p)}\left(\frac{\gamma}{2}\right),\;\hat{R}_{s,k(boot\text{-}p)}\left(1-\frac{\gamma}{2}\right)\right].$ (4.1)

    Step 1: Based on the samples $\{x_{i1},x_{i2},\dots,x_{ik}\}$, i = 1, 2, ..., n, and $\{y_{1},y_{2},\dots,y_{n}\}$, compute $\hat{R}_{s,k}$.

    Step 2: The same as the parametric Boot-p in Step 1.

    Step 3: Compute the $T^{*}$ statistic, defined as

    $T^{*}=\frac{\hat{R}^{*}_{s,k}-\hat{R}_{s,k}}{\sqrt{\hat{V}\left(\hat{R}^{*}_{s,k}\right)}},$

    where $\hat{V}(\hat{R}^{*}_{s,k})$ can be computed as in Eq (2.15).

    Step 4: Repeat Steps 1–3 NBoot times.

    Step 5: Let $G_2(z)=P(T^{*}\leq z)$ be the cumulative distribution function of $T^{*}$ for given z. Define $\hat{R}_{s,k(boot\text{-}t)}(z)=\hat{R}_{s,k}+G_2^{-1}(z)\sqrt{\hat{V}(\hat{R}_{s,k})}$. Then, the approximate bootstrap-t 100(1 − γ)% CI of Rs,k is given by

    $\left[\hat{R}_{s,k(boot\text{-}t)}\left(\frac{\gamma}{2}\right),\;\hat{R}_{s,k(boot\text{-}t)}\left(1-\frac{\gamma}{2}\right)\right].$ (4.2)
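    The percentile step in Eq (4.1) amounts to reading off empirical quantiles of the bootstrap replicates. A minimal sketch of the Boot-p case (function name and index convention are our own):

```python
def boot_p_ci(replicates, gamma=0.05):
    """Percentile (Boot-p) interval of Eq (4.1): order the NBoot bootstrap
    replicates of R_hat and take the gamma/2 and 1 - gamma/2 order statistics."""
    est = sorted(replicates)
    nb = len(est)
    cut = int(nb * gamma / 2)          # number of replicates trimmed per tail
    return est[cut], est[nb - 1 - cut]

# e.g. replicates gathered by repeating Steps 1-2 and storing R_hat_star each time
ci = boot_p_ci([i / 999 for i in range(1000)])
```

    The Boot-t interval of Eq (4.2) works the same way, except the quantiles are taken over the studentized statistic $T^{*}$ and then rescaled by $\sqrt{\hat{V}(\hat{R}_{s,k})}$.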

    In this section, Bayesian estimates are obtained for the parameters, which are assumed to be random, and the uncertainties in the parameters are described by a joint prior distribution specified before the failure data are collected. The Bayesian approach is highly useful in reliability analysis because it can incorporate prior knowledge into the analysis. Bayesian estimates of the unknown parameters α1, α2, β and λ, as well as of the reliability Rs,k, are developed under the SELF and the LINEX loss function. It is assumed here that the parameters α1, α2, β and λ are independent and follow the gamma prior distributions

    $\begin{cases}\pi_1(\alpha_1)\propto\alpha_1^{a_1-1}\exp(-b_1\alpha_1), & \alpha_1>0,\\ \pi_2(\alpha_2)\propto\alpha_2^{a_2-1}\exp(-b_2\alpha_2), & \alpha_2>0,\\ \pi_3(\beta)\propto\beta^{a_3-1}\exp(-b_3\beta), & \beta>0,\\ \pi_4(\lambda)\propto\lambda^{a_4-1}\exp(-b_4\lambda), & \lambda>0,\end{cases}$ (5.1)

    where all the hyperparameters ai and bi, i = 1, 2, 3, 4, are assumed to be known nonnegative numbers. To elicit the hyperparameters of the independent joint prior (5.1), we can use the ML estimates and the variance-covariance matrix of the MLEs. By equating the mean and variance of the gamma priors, the estimated hyperparameters can be written as

    $a_j=\frac{\left[\frac{1}{L}\sum_{i=1}^{L}\hat{\Omega}_{ij}\right]^{2}}{\frac{1}{L-1}\sum_{i=1}^{L}\left[\hat{\Omega}_{ij}-\frac{1}{L}\sum_{i=1}^{L}\hat{\Omega}_{ij}\right]^{2}},\quad j=1,\dots,4,$
    $b_j=\frac{\frac{1}{L}\sum_{i=1}^{L}\hat{\Omega}_{ij}}{\frac{1}{L-1}\sum_{i=1}^{L}\left[\hat{\Omega}_{ij}-\frac{1}{L}\sum_{i=1}^{L}\hat{\Omega}_{ij}\right]^{2}},\quad j=1,\dots,4,$

    where L is the number of iterations and $\hat{\Omega}_{ij}$ denotes the ith estimate of the jth parameter.

    Combining the likelihood function in Eq (2.4) with the priors in Eq (5.1) yields the posterior distribution of the parameters α1, α2, β and λ, denoted by $\pi(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})$, which can be expressed as

    $\pi(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})=\frac{\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda}.$ (5.2)

    A commonly used loss function is the SELF, a symmetric loss function that assigns equal losses to overestimation and underestimation. If ϕ is the parameter to be estimated by an estimator $\hat{\phi}$, then the squared error loss function is defined as:

    $L(\phi,\hat{\phi})=(\hat{\phi}-\phi)^{2}.$

    Therefore, the Bayes estimate of any function of α1,α2, β and λ, say g(α1,α2,β,λ) under the SELF can be obtained as

    $\hat{g}_{BS}(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})=E_{\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y}}\left(g(\alpha_1,\alpha_2,\beta,\lambda)\right),$

    where

    $E_{\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y}}\left(g(\alpha_1,\alpha_2,\beta,\lambda)\right)=\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}g(\alpha_1,\alpha_2,\beta,\lambda)\,\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda}.$ (5.3)

    Varian [43] considered the LINEX loss function $L(\Delta)$ for a parameter ϕ, given by

    $L(\Delta)=\left(e^{c\Delta}-c\Delta-1\right),\quad c\neq 0,\quad \Delta=\hat{\phi}-\phi.$ (5.4)

    This loss function is suitable for situations where overestimation of ϕ is more costly than its underestimation. Zellner [44] discussed Bayesian estimation and prediction using the LINEX loss. Hence, under the LINEX loss function in Eq (5.4), the Bayes estimate of a function g(α1, α2, β, λ) is

    $\hat{g}_{BL}(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})=-\frac{1}{c}\log\left[E\left(e^{-c\,g(\alpha_1,\alpha_2,\beta,\lambda)}\mid\underline{x},\underline{y}\right)\right],\quad c\neq 0,$ (5.5)

    where

    $E\left(e^{-c\,g(\alpha_1,\alpha_2,\beta,\lambda)}\mid\underline{x},\underline{y}\right)=\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}e^{-c\,g(\alpha_1,\alpha_2,\beta,\lambda)}\,\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi_1(\alpha_1)\pi_2(\alpha_2)\pi_3(\beta)\pi_4(\lambda)\,L(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda}.$ (5.6)

    The multiple integrals in Eqs (5.3) and (5.6) cannot be evaluated analytically. Thus, the MCMC technique can be used to generate samples from the joint posterior density function in Eq (5.2). To implement the MCMC technique, we consider the Gibbs within Metropolis-Hastings sampling procedure. Metropolis-Hastings and Gibbs sampling are two useful MCMC methods that have been widely used in statistics.

    The joint posterior density function of α1,α2, β and λ is obtained as follows:

    $\pi(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\propto\alpha_1^{nk+a_1-1}\alpha_2^{n+a_2-1}\beta^{nk+n+a_3-1}\lambda^{a_4-nk-n-1}\exp\left(-b_1\alpha_1-b_2\alpha_2-b_3\beta-b_4\lambda\right)\prod_{i=1}^{n}\left[\prod_{j=1}^{k}x_{ij}^{\beta-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-(r_j+1)\alpha_1-1}\right]\prod_{i=1}^{n}y_{i}^{\beta-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-(S_i+1)\alpha_2-1}.$ (5.7)

    Under the SELF, the Bayesian estimate of Rs,k is the mean of the posterior function in Eq (5.7), which can be written as shown below:

    $\tilde{R}_{s,k}=\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}R_{s,k}\,\pi(\alpha_1,\alpha_2,\beta,\lambda\mid\underline{x},\underline{y})\,d\alpha_1\,d\alpha_2\,d\beta\,d\lambda.$ (5.8)

    The integral given in Eq (5.8) cannot be calculated analytically. As a result, the Bayesian estimator of Rs,k is obtained using MCMC methods, specifically Gibbs sampling. The next subsection describes the specifics of these strategies.

    The Gibbs sampling method, a subtype of the MCMC method, is employed to create the Bayesian estimate of Rs,k and the related credible interval. The idea behind this method is to use posterior conditional density functions to generate posterior samples of the parameters of interest. The posterior density function of the parameters is given by Eq (5.7). Using this equation, the posterior conditional density functions of α1, α2, β and λ can be expressed as follows:

    $\pi_1(\alpha_1\mid\alpha_2,\beta,\lambda,\underline{x},\underline{y})\propto\alpha_1^{nk+a_1-1}\exp(-b_1\alpha_1)\prod_{i=1}^{n}\prod_{j=1}^{k}x_{ij}^{\beta-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-(r_j+1)\alpha_1-1},$ (5.9)
    $\pi_2(\alpha_2\mid\alpha_1,\beta,\lambda,\underline{x},\underline{y})\propto\alpha_2^{n+a_2-1}\exp(-b_2\alpha_2)\prod_{i=1}^{n}y_{i}^{\beta-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-(S_i+1)\alpha_2-1},$ (5.10)
    $\pi_3(\beta\mid\alpha_1,\alpha_2,\lambda,\underline{x},\underline{y})\propto\beta^{nk+n+a_3-1}\exp(-b_3\beta)\prod_{i=1}^{n}\prod_{j=1}^{k}x_{ij}^{\beta-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-(r_j+1)\alpha_1-1}\prod_{i=1}^{n}y_{i}^{\beta-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-(S_i+1)\alpha_2-1}$ (5.11)

    and

    $\pi_4(\lambda\mid\alpha_1,\alpha_2,\beta,\underline{x},\underline{y})\propto\lambda^{a_4-nk-n-1}\exp(-b_4\lambda)\prod_{i=1}^{n}\prod_{j=1}^{k}x_{ij}^{\beta-1}\left(1+\frac{x_{ij}^{\beta}}{\lambda}\right)^{-(r_j+1)\alpha_1-1}\prod_{i=1}^{n}y_{i}^{\beta-1}\left(1+\frac{y_{i}^{\beta}}{\lambda}\right)^{-(S_i+1)\alpha_2-1}.$ (5.12)

    The conditional density functions of α1, α2, β and λ cannot be reduced to well-known density functions, as shown by Eqs (5.9)–(5.12). In this case, we can utilize the Metropolis-Hastings (MH) technique, developed by Metropolis et al. [45], to create random samples from the posterior density of α1, α2, β and λ using a normal proposal distribution.

    The steps of Gibbs sampling are described as follows:

    (1) Start with initial guess (α(0)1,α(0)2,β(0),λ(0)).

    (2) Set l=1.

    (3) Using the following M-H algorithm, generate α(l)1,α(l)2,β(l) and λ(l) from

    $\pi_1(\alpha_1^{(l)}\mid\alpha_2^{(l-1)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})$, $\pi_2(\alpha_2^{(l)}\mid\alpha_1^{(l)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})$, $\pi_3(\beta^{(l)}\mid\alpha_1^{(l)},\alpha_2^{(l)},\lambda^{(l-1)},\underline{x},\underline{y})$ and $\pi_4(\lambda^{(l)}\mid\alpha_1^{(l)},\alpha_2^{(l)},\beta^{(l)},\underline{x},\underline{y})$ with the normal proposal distributions

    $N(\alpha_1^{(l-1)},V(\alpha_1)),\quad N(\alpha_2^{(l-1)},V(\alpha_2)),\quad N(\beta^{(l-1)},V(\beta))\quad\text{and}\quad N(\lambda^{(l-1)},V(\lambda)),$

    where V(α1),V(α2),V(β) and V(λ) can be obtained from the main diagonal in the inverse Fisher information matrix.

    (4) Generate a proposal $\alpha_1^{*}$ from $N(\alpha_1^{(l-1)},V(\alpha_1))$, $\alpha_2^{*}$ from $N(\alpha_2^{(l-1)},V(\alpha_2))$, $\beta^{*}$ from $N(\beta^{(l-1)},V(\beta))$ and $\lambda^{*}$ from $N(\lambda^{(l-1)},V(\lambda))$.

    (i) Evaluate the acceptance probabilities

    $\eta_{\alpha_1}=\min\left[1,\frac{\pi_1(\alpha_1^{*}\mid\alpha_2^{(l-1)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})}{\pi_1(\alpha_1^{(l-1)}\mid\alpha_2^{(l-1)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})}\right],\quad \eta_{\alpha_2}=\min\left[1,\frac{\pi_2(\alpha_2^{*}\mid\alpha_1^{(l)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})}{\pi_2(\alpha_2^{(l-1)}\mid\alpha_1^{(l)},\beta^{(l-1)},\lambda^{(l-1)},\underline{x},\underline{y})}\right],$
    $\eta_{\beta}=\min\left[1,\frac{\pi_3(\beta^{*}\mid\alpha_1^{(l)},\alpha_2^{(l)},\lambda^{(l-1)},\underline{x},\underline{y})}{\pi_3(\beta^{(l-1)}\mid\alpha_1^{(l)},\alpha_2^{(l)},\lambda^{(l-1)},\underline{x},\underline{y})}\right],\quad \eta_{\lambda}=\min\left[1,\frac{\pi_4(\lambda^{*}\mid\alpha_1^{(l)},\alpha_2^{(l)},\beta^{(l)},\underline{x},\underline{y})}{\pi_4(\lambda^{(l-1)}\mid\alpha_1^{(l)},\alpha_2^{(l)},\beta^{(l)},\underline{x},\underline{y})}\right].$

    (ii) Generate u1, u2, u3 and u4 from the uniform (0,1) distribution.

    (iii) If $u_1<\eta_{\alpha_1}$, accept the proposal and set $\alpha_1^{(l)}=\alpha_1^{*}$; otherwise set $\alpha_1^{(l)}=\alpha_1^{(l-1)}$.

    (iv) If $u_2<\eta_{\alpha_2}$, accept the proposal and set $\alpha_2^{(l)}=\alpha_2^{*}$; otherwise set $\alpha_2^{(l)}=\alpha_2^{(l-1)}$.

    (v) If $u_3<\eta_{\beta}$, accept the proposal and set $\beta^{(l)}=\beta^{*}$; otherwise set $\beta^{(l)}=\beta^{(l-1)}$.

    (vi) If $u_4<\eta_{\lambda}$, accept the proposal and set $\lambda^{(l)}=\lambda^{*}$; otherwise set $\lambda^{(l)}=\lambda^{(l-1)}$.

    (5) Compute $R_{s,k}^{(l)}$ at $(\alpha_1^{(l)},\alpha_2^{(l)},\beta^{(l)},\lambda^{(l)})$.

    (6) Set l=l+1.

    (7) Repeat Steps (3)–(6) N times to obtain $\alpha_1^{(l)},\alpha_2^{(l)},\beta^{(l)},\lambda^{(l)}$ and $R_{s,k}^{(l)}$, $l=1,2,\dots,N$.

    (8) To compute the credible intervals (CRIs) of α1, α2, β, λ and Rs,k, order $\psi_k^{(l)}$, k = 1, 2, 3, 4, 5, where $(\psi_1,\psi_2,\psi_3,\psi_4,\psi_5)=(\alpha_1,\alpha_2,\beta,\lambda,R_{s,k})$, as $\psi_k^{(1)}<\psi_k^{(2)}<\dots<\psi_k^{(N)}$. Then the 100(1 − γ)% CRI of $\psi_k$ is

    $\left(\psi_{k(N\gamma/2)},\;\psi_{k(N(1-\gamma/2))}\right).$

    The first M simulated variates are discarded in order to ensure convergence and remove the effect of the initial values. The retained samples are $\psi_k^{(j)}$, j = M + 1, ..., N, for sufficiently large N.

    Based on the SELF, the approximate Bayes estimate of $\psi_k$ is given by

    $\hat{\psi}_k=\frac{1}{N-M}\sum_{j=M+1}^{N}\psi_k^{(j)},\quad k=1,2,3,4,5,$

    and the approximate Bayes estimate of $\psi_k$ under the LINEX loss function, from Eq (5.6), is

    $\hat{\psi}_k=-\frac{1}{c}\log\left[\frac{1}{N-M}\sum_{j=M+1}^{N}e^{-c\,\psi_k^{(j)}}\right],\quad k=1,2,3,4,5.$
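    Once the chain is available, the two estimates above are simple sample averages over the retained draws. A minimal sketch (function name is our own; the chain here is a stand-in, not MCMC output):

```python
import math

def bayes_estimates(chain, burn_in, c=0.5):
    """SELF and LINEX estimates from one MCMC chain psi^{(j)}:
    SELF  = mean of the retained draws;
    LINEX = -(1/c) * log( mean of exp(-c * psi) over the retained draws )."""
    kept = chain[burn_in:]
    m = len(kept)
    self_est = sum(kept) / m
    linex_est = -(1 / c) * math.log(sum(math.exp(-c * p) for p in kept) / m)
    return self_est, linex_est

s_hat, l_hat = bayes_estimates([2.0] * 100, burn_in=10)
```

    By Jensen's inequality, the LINEX estimate with c > 0 never exceeds the SELF estimate, which is a handy sanity check on an implementation.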

    In this section, random samples are generated from the POLO distribution using R code. The simulation experiment is carried out to determine the reliability coefficient and compare the suggested methods.

    The performance of the parameter and Rs,k estimates is compared for different sample sizes based on Monte Carlo simulation, where k = 5 and s = 2, 3, 4. A total of 5,000 random samples of sizes n1 = 10, n2 = 15, n3 = 15, n4 = 10, n5 = 12 are created from the stress and strength populations, and the censored sample sizes are chosen as (m1 = 7, m2 = 10, m3 = 10, m4 = 8, m5 = 9) and (m1 = 9, m2 = 13, m3 = 13, m4 = 9, m5 = 11). This section examines empirical results derived from the Monte Carlo simulations to see how the proposed methods perform with different sample sizes. For (aj, bj), j = 1, ..., 4, we use the estimates and variance-covariance matrix of the MLE approach to elicit the hyperparameters of the independent joint prior; the estimated hyperparameters are calculated by equating the mean and variance of the gamma priors. For generating the random variables, the values of the parameters α1, α2, λ and β are chosen as follows:

    $\alpha_1 = 0.5$, $\lambda = 2$, $\alpha_2 = 3$, $\beta = 1.2$; $\quad$ $\alpha_1 = 1.5$, $\lambda = 0.5$, $\alpha_2 = 0.5$, $\beta = 1.2$; $\quad$ $\alpha_1 = 1.3$, $\lambda = 1.2$, $\alpha_2 = 2$, $\beta = 1.5$.

    Tables 1–6 show the simulation results for the MLE, MPS, and Bayesian point estimates and the interval estimates of $R_{s,k}$. All results are calculated from 5000 simulated samples. The estimation methods are compared by calculating the bias, the mean squared error (MSE), the length of the asymptotic and bootstrap confidence intervals (L.CI), and the coverage probability (CP) for each method. In Tables 1–3, for each sample size $m_i$, scheme (S), and estimator, the first four values give the average bias and MSE of the model parameters, and the next three values give the estimated risk of the corresponding stress-strength reliability for $k = 5$ and $s = 2, 3, 4$, respectively. In the interval-estimation results of Tables 4–6, for each sample size $m_i$ and scheme (S), the MLE and MPS columns report the average length of the asymptotic CI (L.CI), the CP, and the Boot-p (BP) and Boot-t (BT) interval lengths of the model parameters, followed by the average length of the delta-method CI of the corresponding $R_{s,k}$ for $k = 5$ and $s = 2, 3, 4$, respectively; the Bayesian columns report the average length of the credible CI (L.CI) of the model parameters and of the corresponding $R_{s,k}$.
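    As a hedged illustration of the comparison criteria reported in Tables 1–6 (bias, MSE, average CI length L.CI, and coverage probability CP), the sketch below uses a made-up normally distributed estimator with a known standard error in place of the actual MLE, MPS, or Bayesian fits of the censored samples.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 1.5                  # e.g. the true beta of Table 1 (assumed)
    n_rep = 5_000                     # number of Monte Carlo replicates

    # Hypothetical sampling distribution of an estimator and its standard
    # error; in the paper each replicate comes from refitting a progressively
    # censored sample.
    est = true_value + rng.normal(0.0, 0.1, size=n_rep)
    se = np.full(n_rep, 0.1)

    bias = np.mean(est - true_value)
    mse = np.mean((est - true_value) ** 2)

    z = 1.959963984540054             # 97.5% standard normal quantile
    lower, upper = est - z * se, est + z * se
    l_ci = np.mean(upper - lower)     # average length of the 95% asymptotic CI
    cp = np.mean((lower <= true_value) & (true_value <= upper))  # coverage
    ```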

    Table 1.  Bias and MSE for the MLE, MPS, and Bayesian estimates when $\alpha_1=1.3$, $\lambda=1.2$, $\alpha_2=2$, $\beta=1.5$.
    α1=1.3,λ=1.2,α2=2,β=1.5
    MLE MPS SELF LELF (c=−0.5) LELF (c=0.5)
    scheme mi Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE
    I 7, 10, 10, 8, 9 α1 0.1166 0.6711 -0.9288 0.9067 0.0195 0.0154 0.0173 0.0153 0.0128 0.0150
    α2 0.4074 2.2733 -1.5609 2.5104 0.0082 0.0201 0.0054 0.0199 0.0000 0.0197
    λ 0.0360 1.4289 -1.1075 1.3954 -0.0260 0.0177 -0.0285 0.0180 -0.0336 0.0179
    β 0.5132 0.3687 0.2303 0.1479 0.1665 0.0382 0.1632 0.0368 0.1567 0.0341
    R2,5 0.0108 0.0054 -0.0825 0.0105 -0.0035 0.0010 -0.0034 0.0009 -0.0032 0.0009
    R3,5 0.0191 0.0081 -0.0878 0.0117 -0.0031 0.0013 -0.0030 0.0013 -0.0028 0.0013
    R4,5 0.0224 0.0076 -0.0745 0.0084 -0.0021 0.0011 -0.0020 0.0011 -0.0018 0.0011
    9, 13, 13, 9, 11 α1 0.2187 0.6604 -0.6722 0.5524 0.0307 0.0120 0.0293 0.0119 0.0263 0.0115
    α2 0.3554 1.0477 -1.2310 1.6946 0.0001 0.0152 -0.0016 0.0152 -0.0050 0.0150
    λ 0.1044 1.3836 -0.9407 1.1079 -0.0350 0.0132 -0.0365 0.0134 -0.0397 0.0128
    β 0.4092 0.2326 0.2519 0.1337 0.1169 0.0202 0.1149 0.0196 0.1108 0.0184
    R2,5 0.0093 0.0041 -0.0713 0.0083 -0.0068 0.0008 -0.0067 0.0008 -0.0066 0.0008
    R3,5 0.0155 0.0059 -0.0763 0.0094 -0.0071 0.0011 -0.0070 0.0011 -0.0069 0.0010
    R4,5 0.0176 0.0053 -0.0651 0.0068 -0.0059 0.0009 -0.0058 0.0009 -0.0057 0.0009
    II 7, 10, 10, 8, 9 α1 -0.9612 0.9661 -0.1381 0.4285 -0.1174 0.0281 -0.1203 0.0291 -0.1262 0.0283
    α2 -1.6005 2.6358 -0.0855 1.1646 -0.0849 0.0309 -0.0882 0.0320 -0.0947 0.0314
    λ -1.1641 1.5252 -0.2988 0.8392 0.1122 0.0311 0.1089 0.0299 0.1023 0.0276
    β 0.6415 0.6517 0.2548 0.1444 -0.0634 0.0142 -0.0652 0.0145 -0.0688 0.0144
    R2,5 -0.0900 0.0130 0.0084 0.0044 0.0123 0.0013 0.0124 0.0014 0.0128 0.0012
    R3,5 -0.0946 0.0142 0.0148 0.0064 0.0159 0.0020 0.0162 0.0020 0.0167 0.0018
    R4,5 -0.0796 0.0101 0.0173 0.0057 0.0157 0.0017 0.0159 0.0018 0.0164 0.0017
    9, 13, 13, 9, 11 α1 -0.6812 0.5859 -0.0313 0.4233 -0.0134 0.0110 -0.0149 0.0111 -0.0178 0.0112
    α2 -1.2476 1.7928 -0.0332 1.1215 -0.0272 0.0150 -0.0290 0.0153 -0.0326 0.0158
    λ -0.9660 1.2791 -0.2640 0.8265 0.0144 0.0119 0.0129 0.0118 0.0099 0.0117
    β 0.5348 0.4175 0.2008 0.0914 0.0358 0.0088 0.0345 0.0086 0.0319 0.0083
    R2,5 -0.0781 0.0101 -0.0046 0.0034 -0.0013 0.0007 -0.0013 0.0007 -0.0011 0.0007
    R3,5 -0.0829 0.0112 -0.0018 0.0046 -0.0008 0.0009 -0.0007 0.0009 -0.0006 0.0010
    R4,5 -0.0703 0.0080 0.0009 0.0039 -0.0002 0.0008 -0.0001 0.0008 0.0000 0.0008

    Table 2.  Bias and MSE for the MLE, MPS, and Bayesian estimates when $\alpha_1=1.5$, $\lambda=0.5$, $\alpha_2=0.5$, $\beta=1.2$.
    α1=1.5,λ=0.5,α2=0.5,β=1.2
    MLE MPS SELF LELF (c=−0.5) LELF (c=0.5)
    scheme mi Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE
    I 7, 10, 10, 8, 9 α1 -1.1344 1.3358 -1.0874 1.2479 -0.0341 0.0161 -0.0364 0.0164 -0.0410 0.0169
    α2 -0.3722 0.1427 -0.3401 0.1226 -0.0741 0.0175 -0.0763 0.0179 -0.0808 0.0187
    λ -1.0672 1.1819 -1.0044 1.0842 0.0140 0.0172 0.0115 0.0171 0.0065 0.0169
    β 1.0407 1.2838 0.6646 0.5397 0.2643 0.0755 0.2606 0.0733 0.2530 0.0689
    R2,5 0.0186 0.0046 0.0436 0.0060 -0.0359 0.0052 -0.0368 0.0053 -0.0386 0.0056
    R3,5 0.0147 0.0026 0.0334 0.0035 -0.0250 0.0027 -0.0257 0.0027 -0.0269 0.0028
    R4,5 0.0101 0.0011 0.0223 0.0015 -0.0157 0.0011 -0.0161 0.0011 -0.0169 0.0012
    9, 13, 13, 9, 11 α1 -0.8252 0.7569 -0.8105 0.7274 0.0237 0.0118 0.0222 0.0116 0.0192 0.0113
    α2 -0.2826 0.0868 -0.2493 0.0715 0.0105 0.0081 0.0092 0.0081 0.0066 0.0080
    λ -0.9980 1.0273 -0.9458 0.9331 -0.0390 0.0135 -0.0408 0.0137 -0.0442 0.0142
    β 0.9210 0.9422 0.6533 0.4777 0.2694 0.0761 0.2655 0.0738 0.2574 0.0693
    R2,5 -0.0033 0.0032 0.0254 0.0038 0.0003 0.0023 -0.0001 0.0024 -0.0009 0.0024
    R3,5 -0.0016 0.0017 0.0195 0.0021 0.0007 0.0013 0.0005 0.0013 -0.0001 0.0013
    R4,5 -0.0007 0.0007 0.0130 0.0009 0.0007 0.0005 0.0005 0.0005 0.0002 0.0005
    II 7, 10, 10, 8, 9 α1 0.1402 0.6173 0.0989 0.5211 0.0584 0.0192 0.0558 0.0187 0.0507 0.0177
    α2 -0.0594 0.0195 0.0955 0.0192 0.0837 0.0195 0.0814 0.0188 0.0767 0.0175
    λ -0.6457 0.5038 0.5071 0.4568 -0.0841 0.0258 -0.0874 0.0268 -0.0941 0.0288
    β 0.8474 0.7433 0.6247 0.4487 0.4021 0.1670 0.3946 0.1606 0.3788 0.1477
    R2,5 -0.0241 0.0080 0.0476 0.0079 0.0292 0.0038 0.0286 0.0037 0.0273 0.0036
    R3,5 -0.0157 0.0042 0.0368 0.0046 0.0223 0.0021 0.0218 0.0021 0.0209 0.0020
    R4,5 -0.0094 0.0018 0.0248 0.0021 0.0148 0.0009 0.0145 0.0009 0.0139 0.0009
    9, 13, 13, 9, 11 α1 0.1264 0.5814 0.0892 1.1950 0.0430 0.0103 0.0414 0.0100 0.0383 0.0096
    α2 -0.0011 0.0140 0.0977 0.1798 0.0754 0.0138 0.0737 0.0134 0.0705 0.0126
    λ -0.3463 0.4628 -0.4952 0.8521 -0.0601 0.0153 -0.0620 0.0157 -0.0657 0.0165
    β 0.8365 0.4767 0.5830 0.3726 0.3238 0.1086 0.3183 0.1048 0.3070 0.0973
    R2,5 -0.0059 0.0051 0.0397 0.0061 0.0283 0.0028 0.0279 0.0028 0.0269 0.0027
    R3,5 -0.0031 0.0027 0.0306 0.0035 0.0214 0.0016 0.0210 0.0016 0.0203 0.0015
    R4,5 -0.0015 0.0011 0.0205 0.0016 0.0142 0.0007 0.0139 0.0007 0.0135 0.0007

    Table 3.  Bias and MSE for the MLE, MPS, and Bayesian estimates when $\alpha_1=0.5$, $\lambda=2$, $\alpha_2=3$, $\beta=1.2$.
    α1=0.5,λ=2,α2=3,β=1.2
    MLE MPS SELF LELF (c=−0.5) LELF (c=0.5)
    S mi Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE
    I 7, 10, 10, 8, 9 α1 -0.3231 0.1160 -0.3187 0.1116 -0.2883 0.0844 -0.2899 0.0853 -0.2928 0.0869
    α2 -2.3918 5.9435 -2.4610 6.1722 -0.1364 0.0377 -0.1406 0.0396 -0.1487 0.0434
    λ -0.5085 0.8239 -0.6595 0.7020 0.1527 0.0370 0.1488 0.0354 0.1410 0.0321
    β -0.6135 0.5168 -0.8425 0.7978 -0.4011 0.1816 -0.4138 0.1937 -0.4380 0.2179
    R2,5 -0.0546 0.0041 -0.0668 0.0055 0.0136 0.0002 0.0136 0.0002 0.0136 0.0002
    R3,5 -0.1098 0.0156 -0.1319 0.0202 0.0489 0.0024 0.0491 0.0024 0.0494 0.0025
    R4,5 -0.1594 0.0314 -0.1884 0.0397 0.1197 0.0146 0.1202 0.0147 0.1212 0.0149
    9, 13, 13, 9, 11 α1 -0.1979 0.0580 -0.2102 0.0571 -0.2177 0.0493 -0.2191 0.0499 -0.2219 0.0512
    α2 -1.7902 3.7333 -1.9899 4.1974 -0.0669 0.0166 -0.0689 0.0171 -0.0730 0.0182
    λ -0.1231 0.6531 -0.3797 0.4140 0.0781 0.0167 0.0762 0.0162 0.0725 0.0153
    β -0.5686 0.5054 -0.8537 0.7814 -0.2391 0.0702 -0.2442 0.0733 -0.2542 0.0797
    R2,5 -0.0364 0.0021 -0.0465 0.0029 0.0116 0.0001 0.0117 0.0001 0.0118 0.0001
    R3,5 -0.0770 0.0088 -0.0970 0.0119 0.0398 0.0016 0.0400 0.0017 0.0404 0.0017
    R4,5 -0.1161 0.0190 -0.1446 0.0251 0.0912 0.0087 0.0917 0.0088 0.0928 0.0090
    II 7, 10, 10, 8, 9 α1 -0.0258 0.0328 -0.0717 0.0388 -0.1630 0.0299 -0.1643 0.0304 -0.1669 0.0312
    α2 -0.3955 1.1937 -0.4753 2.1640 -0.0271 0.0214 -0.0299 0.0217 -0.0353 0.0224
    λ 0.8106 1.7667 0.4953 1.5316 0.0554 0.0230 0.0526 0.0224 0.0470 0.0213
    β -0.6871 0.5263 -0.8098 0.7180 -0.2307 0.0704 -0.2361 0.0736 -0.2466 0.0800
    R2,5 -0.0080 0.0002 -0.0070 0.0003 0.0094 0.0001 0.0095 0.0001 0.0096 0.0001
    R3,5 -0.0186 0.0012 -0.0147 0.0016 0.0311 0.0011 0.0313 0.0011 0.0316 0.0011
    R4,5 -0.0291 0.0037 -0.0204 0.0046 0.0687 0.0054 0.0691 0.0055 0.0700 0.0056
    9, 13, 13, 9, 11 α1 -0.0368 0.0269 -0.0381 0.0353 -0.1543 0.0266 -0.1555 0.0269 -0.1578 0.0277
    α2 -0.2684 0.5381 -0.4694 1.7106 -0.0234 0.0122 -0.0251 0.0123 -0.0284 0.0126
    λ 0.5297 0.6023 0.4497 1.0560 0.0447 0.0141 0.0430 0.0138 0.0398 0.0133
    β -0.6739 0.5091 -0.8528 0.6536 -0.1712 0.0409 -0.1745 0.0425 -0.1811 0.0456
    R2,5 -0.0029 0.0001 -0.0094 0.0003 0.0091 0.0001 0.0092 0.0001 0.0093 0.0001
    R3,5 -0.0052 0.0009 -0.0211 0.0017 0.0299 0.0010 0.0300 0.0010 0.0304 0.0010
    R4,5 -0.0049 0.0029 -0.0326 0.0046 0.0651 0.0048 0.0656 0.0049 0.0664 0.0050

    Table 4.  Length of CI for the MLE, MPS, and Bayesian estimates when $\alpha_1=1.3$, $\lambda=1.2$, $\alpha_2=2$, $\beta=1.5$.
    α1=1.3,λ=1.2,α2=2,β=1.5
    MLE MPS Bayesian
    mi L.CI CP BP BT L.CI CP BP BT L.CI L.CI L.CI
    I 7, 10, 10, 8, 9 α1 3.1817 0.9440 0.2955 0.1936 0.8224 96% 0.0581 0.0582 0.4513 0.4488 0.4471
    α2 5.6962 0.9520 0.5681 0.3573 1.0666 96% 0.0755 0.0755 0.5389 0.5387 0.5333
    λ 4.6884 0.9440 0.4749 0.3172 1.6126 96% 0.0863 0.0837 0.5159 0.5188 0.5120
    β 1.2733 0.9520 0.0929 0.0954 1.2084 69% 0.0580 0.0580 0.3983 0.3914 0.3873
    R2,5 0.2859 0.9440 0.0122 0.0121 0.2385 96% 0.0108 0.0109 0.1174 0.1181 0.1178
    R3,5 0.3451 0.9520 0.0124 0.0124 0.2481 95% 0.0111 0.0113 0.1367 0.1383 0.1374
    R4,5 0.3301 0.9440 0.0108 0.0108 0.2086 95% 0.0094 0.0092 0.1243 0.1257 0.1248
    9, 13, 13, 9, 11 α1 3.0721 0.9517 0.1040 0.0705 1.2440 95% 0.0358 0.0347 0.4130 0.4126 0.4104
    α2 4.7821 0.9571 0.1288 0.0923 1.6616 96% 0.0490 0.0489 0.4800 0.4819 0.4812
    λ 4.5989 0.9517 0.1631 0.1365 1.8531 95% 0.0745 0.0726 0.4028 0.4030 0.3994
    β 1.0024 0.9571 0.0614 0.0610 1.0397 80% 0.0444 0.0444 0.3027 0.2984 0.2907
    R2,5 0.2479 0.9517 0.0111 0.0110 0.2230 95% 0.0100 0.0101 0.1038 0.1040 0.1024
    R3,5 0.2958 0.9571 0.0122 0.0121 0.2359 95% 0.0109 0.0111 0.1203 0.1212 0.1206
    R4,5 0.2768 0.9517 0.0099 0.0098 0.2006 95% 0.0090 0.0090 0.1088 0.1097 0.1088
    II 7, 10, 10, 8, 9 α1 1.3806 0.9314 0.1942 0.1499 2.5109 93% 0.1190 0.1191 0.4597 0.4629 0.4572
    α2 1.0686 0.9580 0.1950 0.2615 4.2213 92% 0.1886 0.1897 0.5881 0.5931 0.5861
    λ 1.6186 0.9640 0.2098 0.2074 3.3981 97% 0.1570 0.1576 0.5437 0.5339 0.5224
    β 1.9230 0.9580 0.1392 0.1832 1.1058 93% 0.0511 0.0512 0.3834 0.3854 0.3907
    R2,5 0.2735 0.9484 0.0685 0.0653 0.2585 95% 0.0120 0.0120 0.1281 0.1293 0.1231
    R3,5 0.2853 0.9580 0.0823 0.0742 0.3077 95% 0.0141 0.0139 0.1532 0.1556 0.1585
    R4,5 0.2413 0.9400 0.0792 0.0694 0.2890 95% 0.0124 0.0123 0.1422 0.1445 0.1482
    9, 13, 13, 9, 11 α1 1.3702 0.9580 0.1492 0.1190 2.5499 98% 0.1096 0.1098 0.4036 0.4036 0.4051
    α2 1.0091 0.9600 0.1590 0.1231 4.1534 96% 0.1741 0.1731 0.4666 0.4688 0.4688
    λ 1.3083 0.9696 0.1492 0.1743 3.4135 96% 0.1449 0.1459 0.4299 0.4255 0.4179
    β 1.4230 0.9600 0.0572 0.0571 0.8867 91% 0.0399 0.0400 0.3372 0.3362 0.3339
    R2,5 0.2482 0.9580 0.0146 0.0138 0.2273 96% 0.0095 0.0094 0.1000 0.1011 0.1040
    R3,5 0.2592 0.9600 0.0175 0.0165 0.2661 95% 0.0114 0.0115 0.1181 0.1194 0.1229
    R4,5 0.2183 0.9580 0.0160 0.0158 0.2455 95% 0.0113 0.0111 0.1085 0.1096 0.1129

    Table 5.  Length of CI for the MLE, MPS, and Bayesian estimates when $\alpha_1=0.5$, $\lambda=2$, $\alpha_2=3$, $\beta=1.2$.
    α1=0.5,λ=2,α2=3,β=1.2
    MLE MPS Bayesian
    S mi L.CI CP BP BT L.CI CP BP BT L.CI L.CI L.CI
    I 7, 10, 10, 8, 9 α1 0.5388 95.84% 0.0236 0.0231 0.4229 95.80% 0.0197 0.0194 0.1330 0.1326 0.1317
    α2 2.8524 97.40% 0.1238 0.1188 1.8529 97.80% 0.0887 0.0891 0.5233 0.5345 0.5504
    λ 3.1342 97.00% 0.1412 0.1411 2.9503 97.60% 0.0950 0.0937 0.4379 0.4330 0.4169
    β 1.4704 95.40% 0.0652 0.0631 1.4704 92.20% 0.0554 0.0556 0.5403 0.5621 0.6123
    R2,5 0.1337 95.40% 0.0058 0.0057 0.1337 93.00% 0.0059 0.0059 0.0061 0.0061 0.0060
    R3,5 0.2334 95.00% 0.0101 0.0102 0.2334 92.60% 0.0102 0.0101 0.0267 0.0267 0.0265
    R4,5 0.3032 94.80% 0.0143 0.0142 0.3032 92.40% 0.0120 0.0117 0.0762 0.0767 0.0767
    9, 13, 13, 9, 11 α1 0.4229 97.20% 0.0220 0.0195 0.4174 97.20% 0.0190 0.0181 0.1163 0.1164 0.1164
    α2 1.8529 96.00% 0.0899 0.0839 1.7953 97.20% 0.0610 0.0574 0.4323 0.4336 0.4438
    λ 2.9503 96.60% 0.1302 0.1298 2.1869 98.00% 0.0893 0.0877 0.3929 0.3876 0.3805
    β 1.1279 95.20% 0.0524 0.0524 1.1005 90.40% 0.0429 0.0424 0.4281 0.4370 0.4542
    R2,5 0.1125 95.60% 0.0051 0.0051 0.1123 92.60% 0.0050 0.0048 0.0031 0.0031 0.0030
    R3,5 0.2110 95.60% 0.0085 0.0087 0.2126 92.40% 0.0083 0.0083 0.0172 0.0171 0.0169
    R4,5 0.2918 95.60% 0.0126 0.0124 0.2974 92.20% 0.0112 0.0115 0.0607 0.0605 0.0602
    II 7, 10, 10, 8, 9 α1 0.7336 98.40% 0.5691 0.5516 0.7204 95.60% 0.0314 0.0314 0.2192 0.2185 0.2188
    α2 4.1699 96.40% 0.4572 0.4431 5.4627 93.84% 0.2435 0.2422 0.5421 0.5431 0.5481
    λ 4.3129 97.20% 0.3872 0.3473 4.4502 94.60% 0.1981 0.1915 0.5373 0.5312 0.5286
    β 0.9525 96.00% 0.0504 0.0504 0.9795 96.20% 0.0444 0.0427 0.4991 0.5135 0.5350
    R2,5 0.0434 95.80% 0.0021 0.0021 0.0613 90.80% 0.0027 0.0028 0.0099 0.0099 0.0099
    R3,5 0.1162 95.80% 0.0059 0.0059 0.1459 89.80% 0.0065 0.0067 0.0381 0.0385 0.0387
    R4,5 0.2187 96.40% 0.0118 0.0116 0.2533 87.80% 0.0116 0.0116 0.0974 0.0984 0.0988
    9, 13, 13, 9, 11 α1 0.6389 99.80% 0.0646 0.0630 0.7216 98.20% 0.0312 0.0311 0.2010 0.2022 0.2044
    α2 2.7270 99.80% 0.2178 0.1547 4.7902 83.40% 0.2177 0.2209 0.4051 0.4074 0.4149
    λ 2.2657 99.80% 0.1677 0.1597 3.6258 85.00% 0.1658 0.1653 0.4314 0.4325 0.4332
    β 0.9371 97.80% 0.0391 0.0376 0.7662 92.40% 0.0343 0.0337 0.4059 0.4136 0.4299
    R2,5 0.0425 95.00% 0.0023 0.0023 0.0594 90.60% 0.0025 0.0025 0.0098 0.0097 0.0097
    R3,5 0.1147 95.40% 0.0055 0.0055 0.1384 89.40% 0.0063 0.0063 0.0368 0.0370 0.0370
    R4,5 0.2155 95.80% 0.0112 0.0112 0.2317 88.20% 0.0105 0.0111 0.0915 0.0917 0.0923

    Table 6.  Length of CI for the MLE, MPS, and Bayesian estimates when $\alpha_1=1.5$, $\lambda=0.5$, $\alpha_2=0.5$, $\beta=1.2$.
    α1=1.5,λ=0.5,α2=0.5,β=1.2
    MLE MPS Bayesian
    S mi L.CI CP BP BT L.CI CP BP BT L.CI L.CI L.CI
    I 7, 10, 10, 8, 9 α1 0.8685 97.00% 0.0513 0.0483 1.0049 96.80% 0.0474 0.0472 0.4759 0.4795 0.4832
    α2 0.2534 95.60% 0.0152 0.0151 0.3274 94.40% 0.0175 0.0172 0.4061 0.4087 0.4093
    λ 0.8133 97.00% 0.0414 0.0375 1.0776 96.80% 0.0464 0.0448 0.4796 0.4816 0.4864
    β 1.7576 95.40% 0.0793 0.0817 1.2285 79.00% 0.0572 0.0568 0.2906 0.2879 0.2784
    R2,5 0.2572 94.40% 0.0119 0.0119 0.2508 93.00% 0.0110 0.0110 0.2408 0.2415 0.2437
    R3,5 0.1917 94.40% 0.0087 0.0087 0.1901 92.60% 0.0088 0.0088 0.1733 0.1738 0.1753
    R4,5 0.1260 94.40% 0.0056 0.0060 0.1264 92.80% 0.0057 0.0055 0.1111 0.1114 0.1124
    9, 13, 13, 9, 11 α1 1.0810 95.80% 0.0410 0.0393 1.0413 95.60% 0.0454 0.0457 0.4103 0.4083 0.4030
    α2 0.3260 95.40% 0.0113 0.0109 0.3787 94.40% 0.0138 0.0140 0.3403 0.3401 0.3385
    λ 0.6944 95.20% 0.0300 0.0320 0.7712 94.40% 0.0326 0.0318 0.4114 0.4161 0.4177
    β 1.2031 96.40% 0.0539 0.0535 0.8853 75.80% 0.0396 0.0396 0.2294 0.2241 0.2149
    R2,5 0.2229 95.00% 0.0098 0.0099 0.2189 92.40% 0.0100 0.0102 0.1847 0.1849 0.1865
    R3,5 0.1633 95.20% 0.0072 0.0071 0.1635 92.20% 0.0070 0.0072 0.1352 0.1353 0.1362
    R4,5 0.1060 95.20% 0.0049 0.0050 0.1075 92.20% 0.0045 0.0044 0.0876 0.0877 0.0882
    II 7, 10, 10, 8, 9 α1 3.4992 99.80% 0.4719 0.4704 3.2811 99.80% 0.3482 0.3690 0.4743 0.4728 0.4708
    α2 0.5714 99.80% 0.3131 0.3061 0.5218 99.80% 0.2913 0.2534 0.3966 0.3918 0.3849
    λ 1.3347 99.80% 0.5013 0.5011 1.2631 99.80% 0.2216 0.2159 0.5294 0.5364 0.5405
    β 0.7185 95.40% 0.0579 0.0584 0.9483 87.80% 0.0436 0.0433 0.2862 0.2773 0.2519
    R2,5 0.3899 94.00% 0.0146 0.0144 0.2950 90.00% 0.0134 0.0133 0.2034 0.2028 0.2016
    R3,5 0.2852 94.80% 0.0103 0.0102 0.2243 90.40% 0.0099 0.0099 0.1522 0.1517 0.1502
    R4,5 0.1849 95.00% 0.0064 0.0064 0.1496 90.80% 0.0068 0.0068 0.1002 0.0999 0.0985
    9, 13, 13, 9, 11 α1 3.0538 99.80% 0.4511 0.4531 4.2752 94.30% 0.1875 0.1868 0.3560 0.3570 0.3513
    α2 0.4794 99.80% 0.2197 0.2018 1.6189 93.90% 0.0797 0.0724 0.3448 0.3429 0.3385
    λ 1.1899 99.80% 0.3827 0.3715 3.0569 94.10% 0.1452 0.1393 0.4109 0.4145 0.4219
    β 0.6036 95.40% 0.0361 0.0361 0.7104 82.60% 0.0355 0.0352 0.2332 0.2267 0.2152
    R2,5 0.2831 93.80% 0.0123 0.0121 0.2648 89.60% 0.0121 0.0120 0.1758 0.1750 0.1733
    R3,5 0.2064 94.00% 0.0094 0.0093 0.2002 89.80% 0.0091 0.0090 0.1307 0.1300 0.1284
    R4,5 0.1337 94.00% 0.0064 0.0064 0.1328 90.00% 0.0060 0.0062 0.0857 0.0852 0.0838


    From the numerical simulations, we can see how the estimated risks decrease with the sample size. This trend is observed for the maximum likelihood, maximum product of spacings, and Bayesian estimates. In terms of estimated risks, the Bayesian estimates of $R_{s,k}$ perform significantly better than the MLE and MPS. We also notice that the estimated risks of the Bayesian estimates under the LINEX loss are often lower than those under the SELF. Based on the calculated findings, the Bayes estimates and their estimated risks are sometimes close to each other. The average length of the HPD intervals is found to be shorter than that of the asymptotic confidence intervals, and the lengths of both intervals shrink as the sample size increases. However, the bootstrap CI has the shortest length.

    For demonstration purposes, the analysis of a pair of real data sets is presented. The aim is to characterize conditions under which severe drought occurs. We assert that there will be no excessive drought if the August water capacity of a reservoir, in at least two of the next five years, exceeds the capacity attained in December of 2019. It is also feasible that, in this case, one observes censored samples rather than complete samples from both groups. To this end, we use the monthly water capacity of the Shasta reservoir in California, for the months of August and December, from 1975 to 2016. The data are available at http://cdec.water.ca.gov/cgi-progs/queryMonthly/SHA. These data have previously been analyzed by several authors, including Nadar and Kizilaslan [46] and Kızılaslan and Nadar [47].

    In the complete data case, assuming $k = 5$ and $s = 2$, $Y_1$ represents the December 1975 capacity, while $X_{11}, \ldots, X_{15}$ represent the August capacities from 1976 to 1980. Similarly, $Y_2$ represents the December 1981 capacity, while $X_{21}, \ldots, X_{25}$ represent the August capacities from 1982 to 1986. Continuing this approach until 2016 yields $n = 7$ observations for $Y$. The transformed data are listed in Table 7.
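    The blocking scheme described above (one December capacity $Y$ followed by the next five August capacities $X_1, \ldots, X_5$) can be sketched as follows; the capacity values below are placeholders, not the actual Shasta data.

    ```python
    # Sketch of the blocking scheme: one December capacity (Y) followed by
    # the next five August capacities (X_1..X_5), repeated over 1975-2016.
    years = list(range(1975, 2017))       # 42 years -> 7 blocks of 6 years
    december = {y: 0.5 for y in years}    # placeholder capacities
    august = {y: 0.5 for y in years}      # placeholder capacities

    blocks = []
    for start in years[::6]:              # block starts: 1975, 1981, 1987, ...
        y_val = december[start]
        x_vals = [august[start + j] for j in range(1, 6)]
        blocks.append((y_val, x_vals))
    ```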

    Table 7.  Transformed data.
    X1 X2 X3 X4 X5 y
    0.287785 0.126977 0.768563 0.703119 0.729986 0.667157
    0.811159 0.829569 0.726164 0.423813 0.715158 0.767135
    0.363359 0.463726 0.371904 0.291172 0.414087 0.640395
    0.538082 0.744881 0.722613 0.561238 0.813964 0.650691
    0.668612 0.524947 0.605979 0.71585 0.529518 0.709025
    0.742025 0.468782 0.345075 0.425334 0.76707 0.82486
    0.613911 0.461618 0.294834 0.392917 0.6881 0.679829


    To begin, we verify that the POLO distribution can be used to model the data set in Table 7. The MLEs of the unknown parameters are given in Table 8, together with the Kolmogorov-Smirnov distance (KSD) values and the corresponding p-values. From this table, the POLO distribution fits the data quite well. For the data sets $X$ and $Y$, the empirical cdf, the estimated pdf with histogram, and the PP-plot are given in Figures 1–3, respectively. In Figure 4, we plot the empirical cdf, the estimated pdf with histogram, and the PP-plot for $X = (x_1, x_2, x_3, x_4, x_5)$. These figures confirm that the POLO distribution fits the data.
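    The Kolmogorov-Smirnov distance reported in Table 8 is the maximum gap between the empirical cdf and the fitted cdf. A minimal sketch follows, with a placeholder cdf standing in for the fitted POLO cdf (whose closed form is not repeated here).

    ```python
    import numpy as np

    def ks_distance(data, cdf):
        """One-sample Kolmogorov-Smirnov distance between the empirical cdf
        of `data` and a fitted cdf (here a placeholder; the paper uses the
        POLO cdf evaluated at the MLEs of Table 8)."""
        x = np.sort(np.asarray(data, dtype=float))
        n = len(x)
        f = cdf(x)
        ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each point
        ecdf_lo = np.arange(0, n) / n       # ECDF just before each point
        return max(np.max(ecdf_hi - f), np.max(f - ecdf_lo))

    # Toy check with uniform data and the uniform cdf.
    rng = np.random.default_rng(2)
    sample = rng.uniform(size=50)
    d = ks_distance(sample, lambda t: np.clip(t, 0.0, 1.0))
    ```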

    Table 8.  MLE and KS-test.
    X1 X2 X3 X4 X5 y
    α 19.6968 17.8297 16.7605 12.2651 16.1808 1.1683
    λ 3.3794 3.9210 3.0245 1.2313 1.5485 0.0030
    β 3.8742 2.7228 3.4386 3.8406 6.9615 16.7439
    KSD 0.1770 0.2777 0.2607 0.2635 0.2449 0.1849
    P-value 0.9538 0.5602 0.6382 0.6252 0.7122 0.9366

    Figure 1.  Estimated cdf for each variable of data.
    Figure 2.  Estimated pdf for each variable of data.
    Figure 3.  PP-plot for each variable of data.
    Figure 4.  Estimated cdf, pdf and PP-plot for X variable.

    Two distinct progressively censored samples are produced from the previous data sets for illustration purposes. Table 9 reports the MLE, MPS, and Bayesian estimates of the unknown model parameters under this scheme, and Table 10 reports the estimates of the reliability of the multi-component stress-strength model. Figure 5 shows the trace and density plots for all parameters in the MCMC run. The posterior density of the MCMC results for each parameter has a symmetric, approximately normal shape, in agreement with the proposal distribution, and the convergence of the MCMC results is confirmed in Figure 5.
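    A simple numerical counterpart of the visual convergence check in Figure 5 is to compare summaries of different portions of the chain. The sketch below uses a stand-in chain and an arbitrary 0.1 drift threshold; both are assumptions for illustration, not the paper's diagnostic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in MCMC chain; in the paper these are the MH draws behind Figure 5.
    chain = rng.normal(4.0, 0.5, size=8_000)

    # Crude stability check in the spirit of the trace plots: compare the
    # means of the first and second halves of the chain, scaled by the
    # chain's standard deviation.
    first, second = chain[: len(chain) // 2], chain[len(chain) // 2 :]
    drift = abs(first.mean() - second.mean()) / chain.std()
    stable = drift < 0.1              # small drift suggests the chain has mixed
    ```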

    Table 9.  MLE, MPS, and Bayesian estimation with different loss functions.
    Non-Bayesian Bayesian
    MLE MPS SELF c=0.5 c=1.5
    scheme estimates SE estimates SE estimates SE estimates estimates
    Complete α1 138.6205 399.0043 29.6983 55.2087 141.4085 21.7230 96.7112 87.4652
    α2 85.6020 247.8306 19.7213 37.1186 88.6028 18.9293 42.9141 32.5260
    λ 22.3500 65.0217 7.7696 15.0518 22.6133 3.9472 19.4669 16.1599
    β 3.9753 0.5159 2.9737 0.4364 4.0554 0.5051 3.9927 3.8738
    I α1 80.9614 329.1084 24.4695 69.4420 83.2840 18.8717 48.1163 37.1532
    α2 66.4676 271.2264 19.6858 54.3504 69.4033 19.5470 37.0415 27.4366
    λ 27.2614 111.9030 12.9688 37.3678 27.7317 4.1830 24.1218 19.9190
    β 3.6505 0.5967 2.5016 0.4731 3.7391 0.5966 3.6531 3.4972
    II α1 146.1876 453.5932 37.4513 87.7195 146.5873 23.0879 98.5567 87.8391
    α2 68.5133 214.2732 19.9103 47.2777 72.6906 18.7809 39.4898 30.9711
    λ 22.5976 70.8227 10.7927 25.9588 23.1222 4.2110 19.8643 16.9361
    β 4.2557 0.6274 3.0077 0.5107 4.2941 0.5640 4.2159 4.0674

    Table 10.  Estimation of reliability in a multicomponent stress-strength model.
    Non-Bayesian Bayesian
    scheme MLE MPS SELF c=0.5 c=1.5
    Complete R1,5 0.6980 0.7203 0.7025 0.5917 0.5343
    R2,5 0.5114 0.5346 0.5161 0.4105 0.3611
    R3,5 0.3606 0.3801 0.3644 0.2798 0.2423
    R4,5 0.2290 0.2429 0.2317 0.1732 0.1484
    I R1,5 0.7821 0.7765 0.7863 0.7641 0.7520
    R2,5 0.6033 0.5967 0.6081 0.5824 0.5689
    R3,5 0.4404 0.4345 0.4448 0.4217 0.4098
    R4,5 0.2873 0.2828 0.2906 0.2733 0.2645
    II R1,5 0.6095 0.6504 0.6279 0.5585 0.5172
    R2,5 0.4265 0.4646 0.4434 0.3815 0.3470
    R3,5 0.2922 0.3223 0.3054 0.2576 0.2318
    R4,5 0.1816 0.2021 0.1906 0.1585 0.1415

    Figure 5.  Convergence plots of MCMC for parameter estimates of the POLO based on the multicomponent stress-strength model.

    Comparing the two schemes under the Bayesian and non-Bayesian approaches, the estimators in Scheme Ⅰ have lower standard errors than those in Scheme Ⅱ. The estimators in Scheme Ⅰ also yield higher reliability estimates than those in Scheme Ⅱ and the complete-sample case.

    This study discusses the multi-component stress-strength model, investigating reliability when both the stress and strength variables follow the POLO distribution. To estimate the multi-component stress-strength reliability $R_{s,k}$, we apply both classical and Bayesian methods. We compute the Bayesian estimates of the parameters and $R_{s,k}$ under symmetric and asymmetric loss functions using the MH algorithm, and the classical estimates, the MLE and MPS of the model parameters and $R_{s,k}$, using the Newton-Raphson (NR) algorithm. Based on the simulation analysis, the estimated risks of the proposed $R_{s,k}$ estimators behave well as the sample size increases. In general, as the sample size increases, the average length of the intervals decreases; moreover, the average length of the highest posterior density intervals is shorter than that of the asymptotic confidence intervals. Based on the tabulated numerical results, the estimated risks of the Bayesian estimates are often lower than those of the classical approaches. To illustrate the applicability of the multi-component stress-strength model under the POLO distribution, we examined real-life data on the monthly water capacity of the Shasta reservoir in California; using the Kolmogorov-Smirnov distance (KSD) and its corresponding p-values, we conclude that the model fits the data very well. This study can be further extended to different censoring schemes and different lifetime models.

    This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University (KFU), Al Ahsa, Saudi Arabia (Project No. GRANT421). The authors therefore acknowledge the technical and financial support of the Deanship of Scientific Research at KFU.

    The authors declare no conflict of interest.



    [39] A. Ghaffar, M. Z. Rahman, V. Leiva, C. Martin-Barreiro, X. Cabezas, C. Castro, Efficiency, optimality, and selection in a rigid actuation system with matching capabilities for an assistive robotic exoskeleton, Eng. Sci. Technol., 51 (2024), 101613. https://doi.org/10.1016/j.jestch.2023.101613 doi: 10.1016/j.jestch.2023.101613
    [40] A. Rehman, Q. W. Wang, Z. H. He, Solution to a system of real quaternion matrix equations encompassing η-Hermicity, Appl. Math. Comput., 265 (2015), 945–957. https://doi.org/10.1016/j.amc.2015.05.104 doi: 10.1016/j.amc.2015.05.104
    [41] A. Rehman, Q. W. Wang, I. Ali, M. Akram, M. O. Ahmad, A constraint system of generalized Sylvester quaternion matrix equations, Adv. Appl. Clifford Algebr., 3 (2017), 3183–3196. https://doi.org/10.1007/s00006-017-0803-1 doi: 10.1007/s00006-017-0803-1
    [42] A. Rehman, I. I. Kyrchei, I. Ali, M. Akram, A. Shakoor, Constraint solution of a classical system of quaternion matrix equations and its Cramer's rule, Iran J. Sci. Technol. Trans. Sci., 45 (2021), 1015–1024. https://doi.org/10.1007/s40995-021-01083-7 doi: 10.1007/s40995-021-01083-7
    [43] Z. Z. Bai, On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations, J. Comput. Math., 29 (2011), 185–198. https://dx.doi.org/10.4208/jcm.1009-m3152 doi: 10.4208/jcm.1009-m3152
    [44] J. K. Baksalary, R. Kala, The matrix equation AXYB=C, Linear Algebra Appl., 25 (1979), 41–43. https://doi.org/10.1016/0024-3795(79)90004-1 doi: 10.1016/0024-3795(79)90004-1
    [45] W. E. Roth, The equations AXYB=C and AXXB=C in matrices, Proc. Amer. Math. Soc., 3 (1952), 392–396. https://doi.org/10.2307/2031890 doi: 10.2307/2031890
    [46] L. Wang, Q. W. Wang, Z. H. He, The common solution of some matrix equations, Algebra Coll., 23 (2016), 71–81. https://doi.org/10.1142/S1005386716000092 doi: 10.1142/S1005386716000092
    [47] Q. W. Wang, Z. H. He, Solvability conditions and general solution for the mixed Sylvester equations, Automatica, 49 (2013), 2713–2719. https://doi.org/10.1016/j.automatica.2013.06.009 doi: 10.1016/j.automatica.2013.06.009
    [48] S. G. Lee, Q. P. Vu, Simultaneous solutions of matrix equations and simultaneous equivalence of matrices, Lin. Alg. Appl., 437 (2012), 2325–2339. https://doi.org/10.1016/j.laa.2012.06.004 doi: 10.1016/j.laa.2012.06.004
    [49] Y. Q. Lin, Y. M. Wei, Condition numbers of the generalized Sylvester equation, IEEE Trans. Automat. Control, 52 (2007), 2380–2385. http://doi.org/10.1109/TAC.2007.910727 doi: 10.1109/TAC.2007.910727
    [50] X. Zhang, A system of generalized Sylvester quaternion matrix equations and its applications, Appl. Math. Comput., 273 (2016), 74–81. https://doi.org/10.1016/j.amc.2015.09.074 doi: 10.1016/j.amc.2015.09.074
    [51] Z. H. He, Q. W. Wang, A pair of mixed generalized Sylvester matrix equations, J. Shanghai Univ. Nat. Sci., 20 (2014), 138–156. http://doi.org/10.3969/j.issn.1007-2861.2014.01.021 doi: 10.3969/j.issn.1007-2861.2014.01.021
    [52] Q. W. Wang, A. Rehman, Z. H. He, Y. Zhang, Constraint generalized Sylvester matrix equations, Automatica, 69 (2016), 60–64. https://doi.org/10.1016/j.automatica.2016.02.024 doi: 10.1016/j.automatica.2016.02.024
    [53] F. O. Farid, Z. H. He, Q. W. Wang, The consistency and the exact solutions to a system of matrix equations, Lin. Multilin. Algebra, 64 (2016), 2133–2158. https://doi.org/10.1080/03081087.2016.1140717 doi: 10.1080/03081087.2016.1140717
    [54] Z. H. He, Q. W. Wang, A system of periodic discrete-time coupled Sylvester quaternion matrix equations, Algebra Coll., 24 (2017), 169–180. https://doi.org/10.1142/S1005386717000104 doi: 10.1142/S1005386717000104
    [55] X. Liu, Z. H. He, η-Hermitian solution to a system of quaternion matrix equations, Bull. Malaysian Math. Sci. Soc., 43 (2020), 4007–4027. https://doi.org/10.1007/s40840-020-00907-w doi: 10.1007/s40840-020-00907-w
    [56] Q. W. Wang, Z. H. He, Systems of coupled generalized Sylvester matrix equations, Automatica, 50 (2014), 2840–2844. https://doi.org/10.1016/j.automatica.2014.10.033 doi: 10.1016/j.automatica.2014.10.033
    [57] Z. H. He, A system of coupled quaternion matrix equations with seven unknowns and its applications, Adv. Appl. Clifford Algebras, 29 (2019), 38. https://doi.org/10.1007/s00006-019-0955-2 doi: 10.1007/s00006-019-0955-2
    [58] V. L. Syrmos, F. L. Lewis, Output feedback eigenstructure assignment using two Sylvester equations, IEEE Trans. Autom. Cont., 38 (1993), 495–499. http://doi.org/10.1109/9.210155 doi: 10.1109/9.210155
    [59] R. C. Li, A bound on the solution to a structured Sylvester equation with an application to relative perturbation theory, SIAM J. Matrix Anal. Appl., 21 (1999), 440–445. https://doi.org/10.1137/S0895479898349586 doi: 10.1137/S0895479898349586
    [60] G. Marsaglia, G. P. H. Styan, Equalities and inequalities for ranks of matrices, Lin. Multilin. Algebra, 2 (1974), 269–292. https://doi.org/10.1080/03081087408817070 doi: 10.1080/03081087408817070
    [61] Q. W. Wang, Z. C. Wu, C. Y. Lin, Extremal ranks of a quaternion matrix expression subject to consistent systems of quaternion matrix equations with applications, Appl. Math. Comput., 182 (2006), 1755–1764. https://doi.org/10.1016/j.amc.2006.06.012 doi: 10.1016/j.amc.2006.06.012
    [62] Z. H. He, Q. W. Wang, The general solutions to some systems of matrix equations, Lin. Multilin. Algebra, 63 (2015), 2017–2032. https://doi.org/10.1080/03081087.2014.896361 doi: 10.1080/03081087.2014.896361
    [63] I. I. Kyrchei, Determinantal representations of the Moore-Penrose inverse over the quaternion skew field and corresponding Cramer's rules, Lin. Multilin. Algebra, 59 (2011), 413–431. https://doi.org/10.1080/03081081003586860 doi: 10.1080/03081081003586860
    [64] Y. Zhang, J. Zhang, J. Weng, Dynamic Moore-Penrose inversion with unknown derivatives: Gradient neural network approach, IEEE Trans. Neur. Net. Learn. Syst., 34 (2023), 10919–10929. http://doi.org/10.1109/TNNLS.2022.3171715 doi: 10.1109/TNNLS.2022.3171715
    [65] Y. Zhang, Improved GNN method with finite-time convergence for time-varying Lyapunov equation, Inform. Sci., 611 (2022), 494–503. https://doi.org/10.1016/j.ins.2022.08.061 doi: 10.1016/j.ins.2022.08.061
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

Metrics: Article views (1119), PDF downloads (132), Cited by (3)
