Research article

Krasnoselskii-type results for equiexpansive and equicontractive operators with application in radiative transfer equations

  • In this paper, we study fixed point results for operator equations of the type x=H(ϝx,x) using the idea of measure of noncompactness, assuming that the operator ϝ is k-set contractive (strictly k-set contractive, or continuous) and that the family {H(u,⋅):u} is equiexpansive or equicontractive. The obtained results generalize Krasnoselskii-type fixed point results. Some examples are given to elaborate the new concepts. We use the main result to establish the existence of solutions for the stationary radiative transfer equation in a channel. We demonstrate the theory with an example, comparing an approximate solution with the exact solution.

    Citation: Niaz Ahmad, Nayyar Mehmood, Thabet Abdeljawad, Aiman Mukheimer. Krasnoselskii-type results for equiexpansive and equicontractive operators with application in radiative transfer equations[J]. AIMS Mathematics, 2023, 8(6): 14592-14608. doi: 10.3934/math.2023746




    Applying censoring techniques to a single population entails specific limitations. While progressive type-II censoring permits the exclusion of specific data, obtaining a sufficient number of observations remains a costly endeavor. Moreover, if our emphasis is on understanding the interactions and interdependencies among populations, experiments conducted solely on a single population may not provide conclusive evidence.

    The joint progressive type-II censoring scheme (JPT-II-CS) provides notable benefits for comparing the lifespan distributions of products produced by different units within the same facility. JPT-II-CS has attracted considerable attention in the research community, with numerous authors exploring JPT-II-CS and related inference methods in the literature. For instance, Rasouli and Balakrishnan [32], Doostparast et al. [18], Balakrishnan et al. [15], Mondal and Kundu [26], Krishna and Goel [24], Goel and Krishna [21], and Goel and Krishna [22] have contributed to this body of work.

    Recently, a multitude of researchers have investigated a range of strategies and different lifetime models in various applications. For more detailed information, please consult the publications by Pandey and Srivastava [29], Qiao and Gui [31], Abdel-Aty et al. [1], Ferreira and Silva [20], Celik and Guloksuz [16], Chiou and Chen [17], Panahi and Lone [28], Yan et al. [34], Asadi et al. [4,7,8,9,10,11,12].

    Within the JPT-II-CS framework, two samples, one from Population-A (Pop-A) and the other from Population-B (Pop-B), comprising $m$ and $n$ units respectively, are amalgamated for a life-testing experiment. Let $k$ be the total number of observed failures in this experiment. Moreover, let $R_1,R_2,\ldots,R_k$ signify the numbers of units removed, such that $\sum_{i=1}^{k}(R_i+1)=m+n$, where $R_i=S_i+T_i$ is the sum of the units removed from Pop-A ($S_i$) and Pop-B ($T_i$) at the $i$-th stage. At the first failure of the joint sample, denoted $W_1$, $R_1=S_1+T_1$ units are randomly chosen from the remaining pool of $m+n-1$ surviving units, where $S_1$ and $T_1$ represent the units removed from Pop-A and Pop-B, respectively. In a similar fashion, at the second stage, $R_2=S_2+T_2$ units are randomly selected from the remaining pool of $m+n-2-R_1$ surviving units, and so forth. Finally, at the $k$-th failure, all the remaining $R_k=n+m-k-\sum_{i=1}^{k-1}R_i$ surviving units are removed. It is crucial to highlight that when $R_1=R_2=\cdots=R_k=0$, it follows that $S_i=T_i=0$ for all $i=1,2,\ldots,k$, and the JPT-II-CS reduces to the complete-sample case. Furthermore, if $R_1=R_2=\cdots=R_{k-1}=0$, so that $R_k=n+m-k$, then the censoring scheme reduces to a conventional Type-II censoring scheme for the two samples.

    In the context of the JPT-II-CS framework, the observed data comprise $(\mathbf{W},\mathbf{Z},\mathbf{S})$, where $\mathbf{W}=(W_1,W_2,\ldots,W_k)$, $k$ is a specified integer such that $1\le k<m+n$, and $\mathbf{Z}=(Z_1,Z_2,\ldots,Z_k)$, where

    $Z_i=\begin{cases}1 & \text{if } W_i \text{ is drawn from the } X\text{-sample},\\ 0 & \text{if } W_i \text{ is drawn from the } Y\text{-sample}.\end{cases}$

    Moreover, $\mathbf{S}=(S_1,S_2,\ldots,S_k)$. We employ $k_1$ to denote the number of failed units from Pop-A and $k_2$ to denote the number of failed units from Pop-B, where $k_1=\sum_{i=1}^{k}Z_i$ and $k_2=k-\sum_{i=1}^{k}Z_i$, respectively.
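    To make the sampling mechanism concrete, the following Python sketch simulates a JPT-II-CS sample $(\mathbf{W},\mathbf{Z},\mathbf{S})$ from two given sets of lifetimes, under the assumptions stated above (removals drawn uniformly at random from the surviving pool). The function name `joint_progressive_type2` and its interface are ours, not the authors'; the paper's own computations were carried out in Mathematica.

```python
import numpy as np

def joint_progressive_type2(x, y, R, rng=None):
    """Simulate a JPT-II-CS sample (W, Z, S) from two pooled samples.

    x, y : lifetimes from Pop-A (m units) and Pop-B (n units)
    R    : removal numbers R_1,...,R_k with k + sum(R) == m + n
    Returns the ordered failure times W, the indicators Z (1 if the failure
    came from Pop-A) and the Pop-A removal counts S (so that T_i = R_i - S_i).
    """
    rng = np.random.default_rng(rng)
    pool = [(t, 1) for t in x] + [(t, 0) for t in y]   # (lifetime, population label)
    W, Z, S = [], [], []
    for r in R:
        # the next observed failure is the smallest remaining lifetime
        i_min = min(range(len(pool)), key=lambda i: pool[i][0])
        w, z = pool.pop(i_min)
        W.append(w)
        Z.append(z)
        # withdraw r surviving units uniformly at random; S_i counts those from Pop-A
        drop = set(rng.choice(len(pool), size=r, replace=False).tolist()) if r else set()
        S.append(sum(pool[i][1] for i in drop))
        pool = [u for i, u in enumerate(pool) if i not in drop]
    return np.array(W), np.array(Z), np.array(S)
```

    For instance, the scheme $(0^{(14)},15)$ used later in Table 1 would be passed as `R = [0]*14 + [15]`.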

    The exponential distribution is renowned for its constant failure rate and memoryless property. However, when exploring phenomena in lifetime and reliability studies, the exponential model may not be suitable, as it cannot exhibit monotone (increasing and decreasing) or non-monotone (bathtub and upside-down bathtub) failure rate behaviors. To address this limitation and provide flexibility, alternative generalizations of the exponential distribution have been proposed, such as the Nadarajah–Haghighi distribution (NHD). Introduced by Nadarajah and Haghighi [27], this distribution serves as a widely used statistical extension of the conventional exponential distribution and is now commonly referred to as the NHD, after the authors' names. The cumulative distribution function (CDF) of the NHD is expressed as follows (see Figure 1):

    $F(x)=1-e^{\,1-(1+\alpha x)^{\gamma}},\qquad x>0,\ \alpha,\gamma>0.$ (1.1)
    Figure 1.  PDF and CDF of the NHD for different values of parameters.

    Additionally, its corresponding probability density function (PDF) is given by:

    $f(x)=\alpha\gamma(1+\alpha x)^{\gamma-1}e^{\,1-(1+\alpha x)^{\gamma}},\qquad x>0,\ \alpha,\gamma>0.$ (1.2)

    In this context, $\gamma$ and $\alpha$ denote the shape and scale parameters, respectively. Setting $\gamma=1$ in Eq (1.1) recovers the exponential distribution as a particular case. Nadarajah and Haghighi [27] demonstrated that the density of the NHD may be decreasing, and that its hazard rate function can exhibit the increasing, decreasing, or constant patterns observed in the gamma, Weibull, and generalized exponential distributions.
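    As a quick illustration of Eqs (1.1) and (1.2), the short Python sketch below evaluates the NHD density and distribution function and draws random variates by inverting (1.1). The helper names are ours; the inverse-CDF formula follows directly from solving $u=F(x)$ for $x$.

```python
import numpy as np

def nhd_pdf(x, alpha, gamma):
    # f(x) = alpha*gamma*(1+alpha*x)^(gamma-1) * exp(1 - (1+alpha*x)^gamma), Eq (1.2)
    return alpha * gamma * (1 + alpha * x) ** (gamma - 1) * np.exp(1 - (1 + alpha * x) ** gamma)

def nhd_cdf(x, alpha, gamma):
    # F(x) = 1 - exp(1 - (1+alpha*x)^gamma), Eq (1.1)
    return 1 - np.exp(1 - (1 + alpha * x) ** gamma)

def nhd_rvs(alpha, gamma, size, rng=None):
    # invert (1.1): x = ((1 - ln(1-u))**(1/gamma) - 1) / alpha with u ~ Uniform(0, 1)
    u = np.random.default_rng(rng).uniform(size=size)
    return ((1 - np.log1p(-u)) ** (1 / gamma) - 1) / alpha
```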

    Several researchers have delved into the estimation challenges associated with the NHD. For instance, Selim [33] investigated estimation and prediction for the NHD based on record values. Inferences and optimal censoring schemes for the progressively first-failure censored NHD were covered by Ashour et al. [5]. Elshahhat et al. [19] studied inference for the Nadarajah–Haghighi parameters under type-II adaptive progressive hybrid censoring with applications. The half-logistic inverted NHD was studied by Alotaibi et al. [2] under ranked set sampling with applications. Azimi and Esmailian [6] examined a novel generalization of the NHD with applications to COVID-19 mortality data and cancer data.

    In this publication, we tackle the challenges associated with estimating the NHD. We present both Bayesian and maximum likelihood estimators (MLEs) for the NHD using the JPT-II-CS, accompanied by their corresponding confidence intervals. We derive Bayes estimators under the squared error (SE) and linear exponential (LINEX) loss functions, assuming independent gamma priors. Monte Carlo simulations are conducted to assess the performance of the various estimators, evaluating them based on mean squared error (MSE) and average values. Additionally, we scrutinize the average confidence lengths of the 95% two-sided interval estimates to gauge their effectiveness. Finally, we illustrate the practical application of our approach using real-world datasets.

    The following outlines the order of the remaining sections of the paper. Section 2 describes the model and the joint censoring scheme. Section 3 explains the process of obtaining maximum likelihood estimates of the unknown parameters. The construction of the Fisher information matrix (FIM) using the MLEs is carried out in Section 4. In Section 5, Bayes estimation is performed under various loss functions using gamma and non-informative priors. Section 6 presents a Monte Carlo simulation analysis along with the results, serving as model validation. Two real data analyses demonstrating the suggested methodologies are carried out in Section 7. Section 8 provides a concise summary of the main points.

    In a particular context, we possess independent and identically distributed (iid) lifetimes, denoted $X_1,X_2,\ldots,X_m$, originating from Pop-A. These lifetimes follow an NHD with CDF $F(x)$ and PDF $f(x)$. In the same vein, $Y_1,Y_2,\ldots,Y_n$ are iid lifetimes from Pop-B, following an NHD with CDF $G(y)$ and PDF $g(y)$. Given a JPT-II-CS $\big((m,n);R_1,R_2,\ldots,R_k\big)$, we define $\mathbf{B}=(\mathbf{W},\mathbf{Z},\mathbf{S})$ as the JPT-II-CS sample of size $k$ obtained from Pop-A and Pop-B; it comprises $(w_1,z_1,s_1),(w_2,z_2,s_2),\ldots,(w_k,z_k,s_k)$. The likelihood function based on the observed JPT-II-CS sample can be expressed as follows:

    $L(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})=C\prod_{i=1}^{k}\Big\{[f(w_i)]^{z_i}[g(w_i)]^{1-z_i}[\bar F(w_i)]^{s_i}[\bar G(w_i)]^{t_i}\Big\},$ (2.1)

    where $w_1\le w_2\le\cdots\le w_k$, $\bar F=1-F$, $\bar G=1-G$, $\sum_{i=1}^{k}s_i+\sum_{i=1}^{k}t_i=\sum_{i=1}^{k}R_i$ and $C=B_1B_2$ with

    $B_1=\prod_{j=1}^{k}\Big\{\Big(m-\sum_{i=1}^{j-1}z_i-\sum_{i=1}^{j-1}s_i\Big)z_j+\Big(n-\sum_{i=1}^{j-1}(1-z_i)-\sum_{i=1}^{j-1}(R_i-s_i)\Big)(1-z_j)\Big\},$
    $B_2=\prod_{j=1}^{k}\binom{m-\sum_{i=1}^{j-1}z_i-\sum_{i=1}^{j-1}s_i}{s_j}\binom{n-\sum_{i=1}^{j-1}(1-z_i)-\sum_{i=1}^{j-1}(R_i-s_i)}{t_j}\binom{m+n-j-\sum_{i=1}^{j-1}R_i}{R_j}^{-1}.$

    MLE stands as a powerful statistical method widely employed in the field of data analysis and parameter estimation. In this manuscript, we employ MLE to estimate the parameters of the NHD. The main principle of MLE is to find the parameter values that maximize the likelihood function, which quantifies the probability of observing the given data under a specific set of parameter values. By maximizing the likelihood, we aim to identify the parameter values that make the observed data most plausible. The MLE estimates are obtained by solving an optimization problem, either analytically or numerically, to find the parameter values that maximize the likelihood function. These estimates are known to possess desirable properties, such as efficiency and consistency, which means that they converge to the true parameter values as the sample size increases. MLE is a versatile and widely applicable method that plays a crucial role in statistical inference and parameter estimation.

    The following outcome is obtained by applying the CDF and PDF from Eqs (1.1) and (1.2) to the likelihood equation given in (2.1):

    $L(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})=\alpha_1^{k_1}\gamma_1^{k_1}\alpha_2^{k_2}\gamma_2^{k_2}\,e^{(\gamma_1-1)\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)}\,e^{\sum_{i=1}^{k}z_i[1-(1+\alpha_1w_i)^{\gamma_1}]}\times e^{(\gamma_2-1)\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)}\,e^{\sum_{i=1}^{k}(1-z_i)[1-(1+\alpha_2w_i)^{\gamma_2}]}\times e^{\sum_{i=1}^{k}s_i[1-(1+\alpha_1w_i)^{\gamma_1}]}\,e^{\sum_{i=1}^{k}t_i[1-(1+\alpha_2w_i)^{\gamma_2}]}.$ (3.1)

    Given that the log-likelihood function exhibits the same monotonic behavior as the likelihood function, it is expressed as follows:

    $\ell(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})=k_1\ln\alpha_1+k_1\ln\gamma_1+k_2\ln\alpha_2+k_2\ln\gamma_2+(\gamma_1-1)\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)+\sum_{i=1}^{k}z_i[1-(1+\alpha_1w_i)^{\gamma_1}]+(\gamma_2-1)\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)+\sum_{i=1}^{k}(1-z_i)[1-(1+\alpha_2w_i)^{\gamma_2}]+\sum_{i=1}^{k}s_i[1-(1+\alpha_1w_i)^{\gamma_1}]+\sum_{i=1}^{k}t_i[1-(1+\alpha_2w_i)^{\gamma_2}].$ (3.2)

    By partially differentiating Eq (3.2) with respect to $\alpha_1,\alpha_2,\gamma_1$ and $\gamma_2$ and setting the derivatives equal to zero, the resulting equations are as follows:

    $\frac{k_1}{\alpha_1}+(\gamma_1-1)\sum_{i=1}^{k}\frac{z_iw_i}{1+\alpha_1w_i}-\sum_{i=1}^{k}z_iw_i\gamma_1(1+\alpha_1w_i)^{\gamma_1-1}-\sum_{i=1}^{k}s_iw_i\gamma_1(1+\alpha_1w_i)^{\gamma_1-1}=0,$ (3.3)
    $\frac{k_2}{\alpha_2}+(\gamma_2-1)\sum_{i=1}^{k}\frac{(1-z_i)w_i}{1+\alpha_2w_i}-\sum_{i=1}^{k}(1-z_i)w_i\gamma_2(1+\alpha_2w_i)^{\gamma_2-1}-\sum_{i=1}^{k}t_iw_i\gamma_2(1+\alpha_2w_i)^{\gamma_2-1}=0,$ (3.4)
    $\frac{k_1}{\gamma_1}+\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)-\sum_{i=1}^{k}z_i(1+\alpha_1w_i)^{\gamma_1}\ln(1+\alpha_1w_i)-\sum_{i=1}^{k}s_i(1+\alpha_1w_i)^{\gamma_1}\ln(1+\alpha_1w_i)=0,$ (3.5)
    $\frac{k_2}{\gamma_2}+\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)-\sum_{i=1}^{k}(1-z_i)(1+\alpha_2w_i)^{\gamma_2}\ln(1+\alpha_2w_i)-\sum_{i=1}^{k}t_i(1+\alpha_2w_i)^{\gamma_2}\ln(1+\alpha_2w_i)=0.$ (3.6)

    It is evident from Eqs (3.3)–(3.6) that explicit expressions for the MLEs $\hat\alpha_1,\hat\alpha_2,\hat\gamma_1$, and $\hat\gamma_2$ are not available. Hence, we recommend using numerical iterative methods, such as the Newton–Raphson procedure, to obtain the values of $\hat\alpha_1,\hat\alpha_2,\hat\gamma_1$, and $\hat\gamma_2$.
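    As an alternative to hand-coded Newton–Raphson iterations, the log-likelihood (3.2) can be maximized with a general-purpose numerical optimizer. The sketch below (Python with SciPy rather than the Mathematica used in the paper) codes (3.2) directly and applies a derivative-free search; `neg_loglik`, the starting values, and the data names `w, z, s, t` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, w, z, s, t):
    """Negative of the log-likelihood (3.2) for JPT-II-CS data from two NHD populations."""
    a1, a2, g1, g2 = theta
    if min(a1, a2, g1, g2) <= 0:
        return np.inf                                    # keep the search in the admissible region
    k1, k2 = z.sum(), (1 - z).sum()
    u1, u2 = np.log1p(a1 * w), np.log1p(a2 * w)          # ln(1 + alpha * w_i)
    ll = (k1 * np.log(a1) + k1 * np.log(g1) + k2 * np.log(a2) + k2 * np.log(g2)
          + (g1 - 1) * np.sum(z * u1) + np.sum((z + s) * (1 - np.exp(g1 * u1)))
          + (g2 - 1) * np.sum((1 - z) * u2) + np.sum((1 - z + t) * (1 - np.exp(g2 * u2))))
    return -ll

# hypothetical data (w, z, s, t) produced by the censoring scheme; derivative-free search
# res = minimize(neg_loglik, x0=[1.0, 1.0, 0.5, 0.5], args=(w, z, s, t), method="Nelder-Mead")
# a1_hat, a2_hat, g1_hat, g2_hat = res.x
```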

    We investigate approximate confidence intervals in this context for the unknown parameters (α1,α2,γ1,γ2), using large sample approximations of the MLEs, which are often known as asymptotic theory. We do this by using the observed FIM to calculate the MLEs' asymptotic variance for the unknown parameters. Next, the observed FIM is represented as:

    $I^{-1}(\alpha_1,\alpha_2,\gamma_1,\gamma_2)=\begin{pmatrix}-\frac{\partial^2\ell}{\partial\alpha_1^2} & -\frac{\partial^2\ell}{\partial\alpha_1\partial\alpha_2} & -\frac{\partial^2\ell}{\partial\alpha_1\partial\gamma_1} & -\frac{\partial^2\ell}{\partial\alpha_1\partial\gamma_2}\\[2pt] -\frac{\partial^2\ell}{\partial\alpha_2\partial\alpha_1} & -\frac{\partial^2\ell}{\partial\alpha_2^2} & -\frac{\partial^2\ell}{\partial\alpha_2\partial\gamma_1} & -\frac{\partial^2\ell}{\partial\alpha_2\partial\gamma_2}\\[2pt] -\frac{\partial^2\ell}{\partial\gamma_1\partial\alpha_1} & -\frac{\partial^2\ell}{\partial\gamma_1\partial\alpha_2} & -\frac{\partial^2\ell}{\partial\gamma_1^2} & -\frac{\partial^2\ell}{\partial\gamma_1\partial\gamma_2}\\[2pt] -\frac{\partial^2\ell}{\partial\gamma_2\partial\alpha_1} & -\frac{\partial^2\ell}{\partial\gamma_2\partial\alpha_2} & -\frac{\partial^2\ell}{\partial\gamma_2\partial\gamma_1} & -\frac{\partial^2\ell}{\partial\gamma_2^2}\end{pmatrix}^{-1},$ (4.1)
    $I^{-1}(\alpha_1,\alpha_2,\gamma_1,\gamma_2)=\begin{pmatrix}\widehat{\mathrm{var}}(\hat\alpha_1) & \mathrm{cov}(\hat\alpha_1,\hat\alpha_2) & \mathrm{cov}(\hat\alpha_1,\hat\gamma_1) & \mathrm{cov}(\hat\alpha_1,\hat\gamma_2)\\ \mathrm{cov}(\hat\alpha_2,\hat\alpha_1) & \widehat{\mathrm{var}}(\hat\alpha_2) & \mathrm{cov}(\hat\alpha_2,\hat\gamma_1) & \mathrm{cov}(\hat\alpha_2,\hat\gamma_2)\\ \mathrm{cov}(\hat\gamma_1,\hat\alpha_1) & \mathrm{cov}(\hat\gamma_1,\hat\alpha_2) & \widehat{\mathrm{var}}(\hat\gamma_1) & \mathrm{cov}(\hat\gamma_1,\hat\gamma_2)\\ \mathrm{cov}(\hat\gamma_2,\hat\alpha_1) & \mathrm{cov}(\hat\gamma_2,\hat\alpha_2) & \mathrm{cov}(\hat\gamma_2,\hat\gamma_1) & \widehat{\mathrm{var}}(\hat\gamma_2)\end{pmatrix}.$ (4.2)

    Derived from the log-likelihood function in (3.2), the following is evident:

    $\frac{\partial^2\ell}{\partial\alpha_1^2}=-\frac{k_1}{\alpha_1^2}-(\gamma_1-1)\sum_{i=1}^{k}\frac{z_iw_i^2}{(1+\alpha_1w_i)^2}-\sum_{i=1}^{k}z_iw_i^2\gamma_1(\gamma_1-1)(1+\alpha_1w_i)^{\gamma_1-2}-\sum_{i=1}^{k}s_iw_i^2\gamma_1(\gamma_1-1)(1+\alpha_1w_i)^{\gamma_1-2},$ (4.3)
    $\frac{\partial^2\ell}{\partial\alpha_2^2}=-\frac{k_2}{\alpha_2^2}-(\gamma_2-1)\sum_{i=1}^{k}\frac{(1-z_i)w_i^2}{(1+\alpha_2w_i)^2}-\sum_{i=1}^{k}(1-z_i)w_i^2\gamma_2(\gamma_2-1)(1+\alpha_2w_i)^{\gamma_2-2}-\sum_{i=1}^{k}t_iw_i^2\gamma_2(\gamma_2-1)(1+\alpha_2w_i)^{\gamma_2-2},$ (4.4)
    $\frac{\partial^2\ell}{\partial\gamma_1^2}=-\frac{k_1}{\gamma_1^2}-\sum_{i=1}^{k}z_i(1+\alpha_1w_i)^{\gamma_1}[\ln(1+\alpha_1w_i)]^2-\sum_{i=1}^{k}s_i(1+\alpha_1w_i)^{\gamma_1}[\ln(1+\alpha_1w_i)]^2,$ (4.5)
    $\frac{\partial^2\ell}{\partial\gamma_2^2}=-\frac{k_2}{\gamma_2^2}-\sum_{i=1}^{k}(1-z_i)(1+\alpha_2w_i)^{\gamma_2}[\ln(1+\alpha_2w_i)]^2-\sum_{i=1}^{k}t_i(1+\alpha_2w_i)^{\gamma_2}[\ln(1+\alpha_2w_i)]^2.$ (4.6)

    Calculating the asymptotic confidence intervals (ACIs) for the parameters $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$ is feasible by leveraging the asymptotic normality of the MLEs. This enables us to construct approximate $(1-\mu)100\%$ ACIs for $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$:

    $\Big(\hat\alpha_1\pm Z_{\mu/2}\sqrt{\widehat{\mathrm{var}}(\hat\alpha_1)}\Big),\quad\Big(\hat\alpha_2\pm Z_{\mu/2}\sqrt{\widehat{\mathrm{var}}(\hat\alpha_2)}\Big),\quad\Big(\hat\gamma_1\pm Z_{\mu/2}\sqrt{\widehat{\mathrm{var}}(\hat\gamma_1)}\Big)\quad\text{and}\quad\Big(\hat\gamma_2\pm Z_{\mu/2}\sqrt{\widehat{\mathrm{var}}(\hat\gamma_2)}\Big).$ (4.7)

    Here, $Z_{\mu/2}$ represents the upper $(\mu/2)$-th percentile of the standard normal distribution.
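    A rough numerical shortcut for (4.7), assuming the `neg_loglik` helper from the previous sketch, is to optimize on the log scale and reuse the optimizer's inverse-Hessian approximation as a stand-in for $I^{-1}$ in (4.2), mapping back by the delta method. Computing the observed FIM from (4.3)–(4.6) at the MLEs would follow the paper more closely; this is only a minimal sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def approx_cis(w, z, s, t, level=0.95):
    """Rough ACIs in the spirit of (4.7), reusing neg_loglik from the previous sketch."""
    obj = lambda eta: neg_loglik(np.exp(eta), w, z, s, t)   # optimize on the log scale for stability
    res = minimize(obj, np.zeros(4), method="BFGS")
    theta = np.exp(res.x)                                   # (alpha1, alpha2, gamma1, gamma2) estimates
    se = theta * np.sqrt(np.diag(res.hess_inv))             # delta method from the log-scale covariance
    zq = norm.ppf(1 - (1 - level) / 2)                      # Z_{mu/2}
    return theta, np.column_stack([theta - zq * se, theta + zq * se])
```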

    In the field of reliability analysis, there are instances where classical estimation using the MLE approach may encounter challenges, particularly when the available data lacks sufficient sampling details. To address this issue, incorporating prior information in conjunction with Bayesian analysis proves beneficial. This section focuses on the Bayesian approach for estimating unknown parameters and deriving the corresponding credible intervals (CRIs).

    In Bayesian statistical inference, the role of the prior distribution is pivotal as it represents our existing knowledge or beliefs regarding the parameters, facilitating a more accurate estimation of the posterior distribution. The selection of an appropriate prior distribution is crucial as it can influence the ultimate results of the inference. The gamma distribution, recognized for its flexibility and favorable properties, is a frequently chosen continuous probability distribution for Bayesian parameter priors. The parameters of the gamma distribution can be adjusted to accommodate diverse prior beliefs. Additionally, the gamma distribution exhibits conjugacy, signifying that when employed as a prior distribution, its product with the likelihood function remains a gamma distribution, simplifying posterior distribution calculations.

    The choice of hyperparameter values, like any other modeling choice in Bayesian analysis, depends on various factors, including prior knowledge, the characteristics of the data, and modeling considerations. Some general considerations are the following:

    ● Prior knowledge: If prior information about the parameters is available, for example from previous studies or domain expertise, the hyperparameters can be set to reflect the likely range or distribution of the true parameters.

    ● Empirical Bayes: In the absence of strong prior knowledge, an empirical Bayes approach may be considered, in which the hyperparameters are estimated from the observed data themselves.

    ● Conjugate priors: Choosing hyperparameters that make the prior conjugate to the likelihood simplifies computation, although this choice does not always reflect genuine prior beliefs.

    ● Non-informative priors: A common choice, especially in the absence of strong prior information, is to set the hyperparameters to values that make the prior non-informative or weakly informative, for instance a flat or diffuse prior.

    In this study, we assume that $c_i$ and $d_i$ are both greater than 0, where $i$ takes the values 1, 2, 3, and 4. This results in the following prior densities:

    $\pi_1(\alpha_1)\propto\alpha_1^{c_1-1}e^{-d_1\alpha_1},\quad \alpha_1>0,\ c_1,d_1>0,$
    $\pi_2(\alpha_2)\propto\alpha_2^{c_2-1}e^{-d_2\alpha_2},\quad \alpha_2>0,\ c_2,d_2>0,$
    $\pi_3(\gamma_1)\propto\gamma_1^{c_3-1}e^{-d_3\gamma_1},\quad \gamma_1>0,\ c_3,d_3>0,$
    $\pi_4(\gamma_2)\propto\gamma_2^{c_4-1}e^{-d_4\gamma_2},\quad \gamma_2>0,\ c_4,d_4>0.$

    In this scenario, $c_i$ and $d_i$, $i=1,2,3,4$, are introduced to incorporate prior information concerning the unknown parameters. Consequently, the joint prior density function for $\alpha_1$, $\alpha_2$, $\gamma_1$, and $\gamma_2$ is structured as follows:

    $\pi(\alpha_1,\alpha_2,\gamma_1,\gamma_2)\propto\alpha_1^{c_1-1}\alpha_2^{c_2-1}\gamma_1^{c_3-1}\gamma_2^{c_4-1}e^{-d_1\alpha_1-d_2\alpha_2-d_3\gamma_1-d_4\gamma_2}.$ (5.1)

    The posterior distribution is a fundamental concept in Bayesian statistics, playing a crucial role in the inference process. It represents the updated belief or knowledge about a parameter or set of parameters after taking into account both prior information and observed data. In Bayesian analysis, the posterior distribution is obtained by combining the prior distribution, which encapsulates our initial beliefs, with the likelihood function, which quantifies the probability of observing the data given the parameter values. The posterior distribution provides a comprehensive summary of uncertainty, allowing us to estimate parameters, make predictions, and conduct various analyses. It serves as a bridge between prior knowledge and observed evidence, enabling us to make informed decisions and update our understanding of the underlying phenomenon.

    By merging Eqs (3.1) and (5.1), the joint posterior density function of $\alpha_1$, $\alpha_2$, $\gamma_1$ and $\gamma_2$ can be represented as follows:

    $\pi(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})=\dfrac{\pi(\alpha_1,\alpha_2,\gamma_1,\gamma_2)\,L(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\pi(\alpha_1,\alpha_2,\gamma_1,\gamma_2)\,L(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})\,d\alpha_1\,d\alpha_2\,d\gamma_1\,d\gamma_2}.$ (5.2)

    The joint posterior density function for α1, α2, γ1 and γ2 can be formulated as:

    $\pi(\alpha_1,\alpha_2,\gamma_1,\gamma_2\mid\text{data})\propto\alpha_1^{k_1+c_1-1}\gamma_1^{k_1+c_3-1}\alpha_2^{k_2+c_2-1}\gamma_2^{k_2+c_4-1}e^{(\gamma_1-1)\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)}\times e^{\sum_{i=1}^{k}z_i[1-(1+\alpha_1w_i)^{\gamma_1}]}e^{(\gamma_2-1)\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)}e^{\sum_{i=1}^{k}(1-z_i)[1-(1+\alpha_2w_i)^{\gamma_2}]}\times e^{\sum_{i=1}^{k}s_i[1-(1+\alpha_1w_i)^{\gamma_1}]}e^{\sum_{i=1}^{k}t_i[1-(1+\alpha_2w_i)^{\gamma_2}]}e^{-d_1\alpha_1-d_2\alpha_2-d_3\gamma_1-d_4\gamma_2}.$ (5.3)

    Evidently, owing to the nonlinear form of (5.3), there is no closed-form solution for the Bayes estimators of α1, α2, γ1, or γ2 when employing the SE and LINEX loss functions. Consequently, we recommend employing the Markov Chain Monte Carlo (MCMC) approach to acquire the Bayes estimates and establish the corresponding CRIs.

    To generate samples using the MCMC approach, we must first determine the conditional posterior distributions of the unknown NHD parameters α1, α2, γ1 and γ2. The conditions are given by

    $\pi_1(\alpha_1\mid\alpha_2,\gamma_1,\gamma_2)\propto\alpha_1^{k_1+c_1-1}e^{-d_1\alpha_1+(\gamma_1-1)\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)}e^{\sum_{i=1}^{k}z_i[1-(1+\alpha_1w_i)^{\gamma_1}]}\times e^{\sum_{i=1}^{k}s_i[1-(1+\alpha_1w_i)^{\gamma_1}]},$ (5.4)
    $\pi_2(\alpha_2\mid\alpha_1,\gamma_1,\gamma_2)\propto\alpha_2^{k_2+c_2-1}e^{-d_2\alpha_2+(\gamma_2-1)\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)}e^{\sum_{i=1}^{k}(1-z_i)[1-(1+\alpha_2w_i)^{\gamma_2}]}\times e^{\sum_{i=1}^{k}t_i[1-(1+\alpha_2w_i)^{\gamma_2}]},$ (5.5)
    $\pi_3(\gamma_1\mid\alpha_1,\alpha_2,\gamma_2)\propto\gamma_1^{k_1+c_3-1}e^{-\gamma_1\left[d_3+\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)\right]},$ (5.6)
    $\pi_4(\gamma_2\mid\alpha_1,\alpha_2,\gamma_1)\propto\gamma_2^{k_2+c_4-1}e^{-\gamma_2\left[d_4+\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)\right]}.$ (5.7)

    From Eqs (5.6) and (5.7), it's apparent that the posterior densities of γ1 and γ2 follow the gamma distribution, given that

    $\pi_3(\gamma_1\mid\alpha_1,\alpha_2,\gamma_2)\sim \mathrm{Gamma}\Big(k_1+c_3,\; d_3+\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)\Big),$ (5.8)

    and

    $\pi_4(\gamma_2\mid\alpha_1,\alpha_2,\gamma_1)\sim \mathrm{Gamma}\Big(k_2+c_4,\; d_4+\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)\Big).$ (5.9)

    It is evident that gamma densities can be used to generate random samples of $\gamma_1$ and $\gamma_2$. However, the density functions $\pi_1(\alpha_1\mid\alpha_2,\gamma_1,\gamma_2)$ and $\pi_2(\alpha_2\mid\alpha_1,\gamma_1,\gamma_2)$ cannot be reduced algebraically to well-known distributions, so obtaining samples from them directly through conventional procedures is not practical. Hence, we turn to the MCMC approach and generate a sample using Gibbs sampling with the Metropolis-Hastings (M-H) algorithm (see Metropolis et al. [25] and Hastings [23]), employing a normal proposal, as described below.

    Step 1. Begin by employing the initial values (α(0)1,α(0)2,γ(0)1,γ(0)2).

    Step 2. Set j=1.

    Step 3. Generate $\gamma_1^{(j)}$ from $\mathrm{Gamma}\Big(k_1+c_3,\; d_3+\sum_{i=1}^{k}z_i\ln(1+\alpha_1w_i)\Big)$.

    Step 4. Generate $\gamma_2^{(j)}$ from $\mathrm{Gamma}\Big(k_2+c_4,\; d_4+\sum_{i=1}^{k}(1-z_i)\ln(1+\alpha_2w_i)\Big)$.

    Step 5. Equations (5.4) and (5.5) can be employed to generate $\alpha_1^{(j)}$ and $\alpha_2^{(j)}$ using the Metropolis–Hastings (M-H) algorithm. The recommended normal proposal distributions are $N\big(\alpha_1^{(j-1)},\mathrm{var}(\alpha_1)\big)$ and $N\big(\alpha_2^{(j-1)},\mathrm{var}(\alpha_2)\big)$. In this instance, the main diagonal of the inverted FIM can be utilized to calculate $\mathrm{var}(\alpha_1)$ and $\mathrm{var}(\alpha_2)$.

    (I) Produce proposed values $\alpha_1^{*}$ and $\alpha_2^{*}$ from the corresponding normal proposal distributions.

    (II) Determine the acceptance probabilities using the following procedure:

    $r_1=\min\Bigg[1,\dfrac{\pi_1\big(\alpha_1^{*}\mid\alpha_2^{(j-1)},\gamma_1^{(j)},\gamma_2^{(j)},\text{data}\big)}{\pi_1\big(\alpha_1^{(j-1)}\mid\alpha_2^{(j-1)},\gamma_1^{(j)},\gamma_2^{(j)},\text{data}\big)}\Bigg],$
    $r_2=\min\Bigg[1,\dfrac{\pi_2\big(\alpha_2^{*}\mid\alpha_1^{(j)},\gamma_1^{(j)},\gamma_2^{(j)},\text{data}\big)}{\pi_2\big(\alpha_2^{(j-1)}\mid\alpha_1^{(j)},\gamma_1^{(j)},\gamma_2^{(j)},\text{data}\big)}\Bigg].$

    (III) Produce a random value u from a uniform distribution within the range of (0,1).

    (IV) If $u\le r_1$, accept the proposal and set $\alpha_1^{(j)}=\alpha_1^{*}$; otherwise, set $\alpha_1^{(j)}=\alpha_1^{(j-1)}$.

    (V) If $u\le r_2$, accept the proposal and set $\alpha_2^{(j)}=\alpha_2^{*}$; otherwise, set $\alpha_2^{(j)}=\alpha_2^{(j-1)}$.

    Step 6. Set $j=j+1$.

    Step 7. Repeat Steps 3–6, $V$ times. Therefore, for each parameter, generically denoted by $\lambda\in\{\alpha_1,\alpha_2,\gamma_1,\gamma_2\}$, the estimated posterior mean under the squared error loss function can be determined as:

    $\hat\lambda_{BS}=E[\lambda\mid\underline{x}]=\dfrac{1}{V-N}\sum_{j=N+1}^{V}\lambda^{(j)}.$ (5.10)

    Lastly, calculate the Bayesian estimates of λ utilizing the LINEX loss function:

    $\hat\lambda_{BL}=-\dfrac{1}{a}\ln\Bigg[\dfrac{1}{V-N}\sum_{j=N+1}^{V}e^{-a\lambda^{(j)}}\Bigg].$ (5.11)

    Here, N denotes the burn-in period.
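    The sampler described in Steps 1–7 can be sketched in Python as follows (an illustration only, not the authors' Mathematica code): the $\gamma$ draws use the gamma conditionals (5.8)–(5.9), the $\alpha$ updates use M-H with normal random-walk proposals and the logarithm of (5.4)–(5.5), and the retained draws feed the SE and LINEX estimates (5.10)–(5.11). The function names, initial values, and proposal standard deviations are illustrative assumptions.

```python
import numpy as np

def log_cond_alpha(a, g, w, ind, rem, c, d):
    """Log of the conditional density (5.4)/(5.5), up to a constant.
    `ind` is z for alpha1 or (1 - z) for alpha2; `rem` is s or t."""
    if a <= 0:
        return -np.inf
    u = np.log1p(a * w)
    return ((ind.sum() + c - 1) * np.log(a) - d * a
            + (g - 1) * np.sum(ind * u)
            + np.sum((ind + rem) * (1 - np.exp(g * u))))

def gibbs_mh(w, z, s, t, c, d, V=11000, burn=1000, prop_sd=(0.1, 0.1), rng=None):
    """Gibbs sampler with M-H steps for (alpha1, alpha2, gamma1, gamma2)."""
    rng = np.random.default_rng(rng)
    k1, k2 = z.sum(), (1 - z).sum()
    a1, a2, g1, g2 = 1.0, 1.0, 0.5, 0.5                      # Step 1: initial values (our choice)
    chain = np.empty((V, 4))
    for j in range(V):
        # Steps 3-4: gamma parameters from their gamma conditionals (5.8)-(5.9); scale = 1/rate
        g1 = rng.gamma(k1 + c[2], 1.0 / (d[2] + np.sum(z * np.log1p(a1 * w))))
        g2 = rng.gamma(k2 + c[3], 1.0 / (d[3] + np.sum((1 - z) * np.log1p(a2 * w))))
        # Step 5: M-H updates for alpha1 and alpha2 with normal random-walk proposals
        prop = rng.normal(a1, prop_sd[0])
        if np.log(rng.uniform()) <= (log_cond_alpha(prop, g1, w, z, s, c[0], d[0])
                                     - log_cond_alpha(a1, g1, w, z, s, c[0], d[0])):
            a1 = prop
        prop = rng.normal(a2, prop_sd[1])
        if np.log(rng.uniform()) <= (log_cond_alpha(prop, g2, w, 1 - z, t, c[1], d[1])
                                     - log_cond_alpha(a2, g2, w, 1 - z, t, c[1], d[1])):
            a2 = prop
        chain[j] = a1, a2, g1, g2
    return chain[burn:]                                      # discard the burn-in period N

# Bayes estimates from the retained draws, as in (5.10) and (5.11)
# post = gibbs_mh(w, z, s, t, c=[0.8] * 4, d=[2.5] * 4)
# se_est    = post.mean(axis=0)                                    # squared-error loss
# linex_est = -np.log(np.mean(np.exp(-3.0 * post), axis=0)) / 3.0  # LINEX with a = 3
```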

    This section's goal is to assess the performance of the various estimation techniques covered in the earlier sections. We demonstrate this by examining a real dataset and conducting a simulation experiment to evaluate the statistical performance of the estimators under the JPT-II-CS. The computations were carried out using Mathematica ver. 10.

    In this section, we conduct simulation studies to assess the performance of the estimation methods developed in the preceding sections. We explore various sample sizes for the two populations, including (m,n) = (10,20),(20,30),(40,50), and different numbers of failures for each sample size, such as (15,20,30),(35,40,50),(60,70,90), respectively. The parameter values for the two populations are set as (α1,α2,γ1,γ2) = (1.5,1.3,0.4,0.3).

    We computed MLEs along with 95% CIs for the parameters $(\alpha_1,\alpha_2,\gamma_1,\gamma_2)$ across all specified scenarios. This process was repeated 1000 times, and we calculated the mean values of the MLEs and the average lengths of their intervals. The results are presented in Tables 1–4. Additionally, we employed informative gamma priors for $\alpha_1$, $\alpha_2$, $\gamma_1$, and $\gamma_2$ in the context of Bayesian estimation under the SE and LINEX loss functions. The hyperparameters were set as $c_i=0.8$ and $d_i=2.5$, where $i=1,2,3,4$, with $a=3$ denoting overestimation and $a=-3$ representing underestimation. We employed the MCMC approach with 11,000 samples to derive Bayesian estimates for $\alpha_1$, $\alpha_2$, $\gamma_1$, and $\gamma_2$, along with 95% CRIs, through 1000 simulations. The initial 1000 values were excluded as burn-in. The MSE was utilized to evaluate the performance of the generated estimators for $\alpha_1$, $\alpha_2$, $\gamma_1$, and $\gamma_2$. Following 1000 repetitions of this process, we computed the mean values of the estimates and the average interval lengths. Tables 1–4 present the results.
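    For one simulation cell, a driver along the following lines (assuming the helper functions from the earlier sketches are in scope) produces the kind of frequentist summaries reported in Tables 1–4, namely average estimates and MSEs over repeated JPT-II-CS samples; the Bayesian summaries would call `gibbs_mh` in the same loop. The scheme shown corresponds to the first row of Table 1; all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

# one simulation cell: (m, n) = (10, 20), r = 15, scheme (0^(14), 15), true values from the text
true = np.array([1.5, 1.3, 0.4, 0.3])                 # (alpha1, alpha2, gamma1, gamma2)
m, n = 10, 20
R = [0] * 14 + [15]

est = []
for rep in range(1000):
    x = nhd_rvs(true[0], true[2], m)                  # Pop-A lifetimes
    y = nhd_rvs(true[1], true[3], n)                  # Pop-B lifetimes
    w, z, s = joint_progressive_type2(x, y, R)
    t = np.array(R) - s
    res = minimize(neg_loglik, x0=[1.0, 1.0, 0.5, 0.5], args=(w, z, s, t), method="Nelder-Mead")
    est.append(res.x)

est = np.array(est)
avg = est.mean(axis=0)                                # average MLEs, as reported in the tables
mse = ((est - true) ** 2).mean(axis=0)                # corresponding MSEs
```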

    Table 1.  Average value, length and corresponding MSE (in parentheses) of estimates for the parameter α1.
    (m,n) | r | Scheme | MLE | ACI length | Bayes SE | Bayes LINEX (a=-3) | Bayes LINEX (a=3) | CRI length
    (10, 20) 15 (0(14),15) 1.0089 7.7486 1.3852 1.3856 1.3849 0.0611
    (0.2412) (0.0132) (0.0131) (0.0130)
    15 (15,0(14)) 1.9192 9.5216 2.5732 2.5736 2.5728 0.0604
    (0.1757) (1.1517) (1.1526) (1.1509)
    20 (0(19),10) 1.4528 8.1424 1.8099 1.8104 1.8095 0.0633
    (0.0992) (0.0961) (0.0963) (0.0958)
    20 (10,0(19)) 2.0877 9.2575 1.8322 1.8343 1.8302 0.1077
    (0.3454) (0.1104) (0.1118) (0.1090)
    25 (0(7),2,0(4),1,0(3),2,0(8)) 1.6978 6.5167 2.3954 2.3955 2.3952 0.0363
    (0.0391) (0.0175) (0.0191) (0.0144)
    30 (0(30)) 0.9524 4.0016 1.5637 1.5639 1.5635 0.0398
    (0.2999) (0.0041) (0.0041) (0.0040)
    (20, 30) 35 (0(34),15) 1.4461 5.6789 1.7213 1.7214 1.7213 0.0272
    (0.0029) (0.0490) (0.0490) (0.0490)
    35 (15,0(34)) 1.2765 3.9130 1.8908 1.8909 1.8908 0.0223
    (0.1500) (0.0527) (0.0527) (0.0527)
    40 (0(39),10) 1.9292 5.5891 1.2361 1.2366 1.2356 0.0540
    (0.1843) (0.0697) (0.0685) (0.0672)
    40 (10,0(39)) 1.9814 5.9534 1.8851 1.8855 1.8847 0.0601
    (0.2318) (0.1483) (0.1486) (0.1480)
    45 (0(5),3,0(33),2,0(5)) 1.3855 4.2144 0.7017 0.7025 0.7008 0.0709
    (0.2131) (0.0637) (0.0534) (0.0544)
    50 (0(50)) 1.7681 4.4709 1.8890 1.8890 1.8890 0.0261
    (0.0719) (0.0513) (0.0514) (0.0512)
    (40, 50) 60 (0(59),30) 2.0131 6.0591 2.8808 2.8806 2.8802 0.0311
    (0.2633) (0.0684) (0.0688) (0.0679)
    60 (30,0(59)) 0.8016 2.2477 0.5408 0.5407 0.5405 0.0130
    (0.4878) (0.0400) (0.0321) (0.201)
    70 (0(69),20) 0.8469 1.9784 0.6599 0.6599 0.6598 0.0164
    (0.4265) (0.1058) (0.1057) (0.1052)
    70 (20,0(69)) 1.0524 2.3534 1.2987 1.2987 1.2986 0.0140
    (0.2003) (0.0405) (0.0402) (0.0401)
    80 (4,0(8),3,0(60),3,0(9)) 2.4501 5.8484 4.4654 4.4656 4.4651 0.0423
    (0.9027) (0.7933) (0.7922) (0.7824)
    90 (0(90)) 1.8013 3.3309 1.6733 1.6734 1.6731 0.0265
    (0.0908) (0.0765) (0.0768) (0.0763)

    Table 2.  Average value, length and corresponding MSE (in parentheses) of estimates for the parameter α2.
    (m,n) | r | Scheme | MLE | ACI length | Bayes SE | Bayes LINEX (a=-3) | Bayes LINEX (a=3) | CRI length
    (10, 20) 15 (0(14),15) 0.9247 7.4446 1.3417 1.3422 1.3413 0.0636
    (0.1409) (0.0017) (0.0018) (0.0017)
    15 (15,0(14)) 0.1539 0.9332 0.1811 0.1811 0.1811 0.0097
    (0.3134) (0.2519) (0.2519) (0.2519)
    20 (0(19),10) 0.6347 3.2751 0.5126 0.5127 0.5126 0.0150
    (0.4426) (0.6200) (0.6210) (0.6120)
    20 (10,0(19)) 2.6140 8.4100 2.8049 2.8055 2.8043 0.0712
    (0.7267) (0.2647) (0.2664) (0.2629)
    25 (0(7),2,0(4),1,0(3),2,0(8)) 0.6021 2.473 0.8028 0.8029 0.8028 0.0268
    (0.4871) (0.2472) (0.2471) (0.2470)
    30 (0(30)) 1.7301 5.7329 2.7105 2.7107 2.7103 0.0599
    (0.1850) (0.1795) (0.1702) (0.1601)
    (20, 30) 35 (0(34),15) 0.3714 1.8375 1.3453 1.3453 1.3453 0.0082
    (0.1623) (0.0021) (0.0021) (0.0021)
    35 (15,0(34)) 1.8284 5.0672 1.4205 1.4208 1.4202 0.0544
    (0.2792) (0.0145) (0.0146) (0.0144)
    40 (0(39),10) 2.0993 5.2002 0.9454 0.9455 0.9453 0.0287
    (0.6390) (0.1257) (0.1250) (0.1248)
    40 (10,0(39)) 2.5850 8.5699 2.4874 2.4876 2.4872 0.0414
    (0.6512) (0.4099) (0.4095) (0.4090)
    45 (0(5),3,0(33),2,0(5)) 2.649 7.6263 1.4086 1.4088 1.4083 0.0468
    (0.8198) (0.0118) (0.0118) (0.0117)
    50 (0(50)) 2.1930 5.6644 1.8755 1.8482 1.8655 0.0350
    (0.7975) (0.3469) (0.3471) (0.3468)
    (40, 50) 60 (0(59),30) 1.0377 3.2544 1.5236 1.5236 1.5236 0.0207
    (0.0688) (0.0500) (0.0500) (0.0500)
    60 (30,0(59)) 2.1438 4.4834 2.4051 2.4052 2.4050 0.0370
    (0.7120) (0.2213) (0.2217) (0.2210)
    70 (0(69),20) 0.969 2.4503 1.7391 1.7391 1.7391 0.0141
    (0.1096) (0.0928) (0.0924) (0.0920)
    70 (20,0(69)) 1.5667 3.193 1.7013 1.7012 1.7010 0.0154
    (0.0712) (0.0161) (0.0162) (0.0158)
    80 (4,0(8),3,0(60),3,0(9)) 1.3662 2.709 1.6230 1.6222 1.6220 0.0167
    (0.1244) (0.1043) (0.1041) (0.1032)
    90 (0(90)) 1.4794 2.6736 0.8824 0.8820 0.8811 0.0112
    (0.0322) (0.0174) (0.0171) (0.0160)

    Table 3.  Average value, length and corresponding MSE (in parentheses) of estimates for the parameter γ1.
    (m,n) | r | Scheme | MLE | ACI length | Bayes SE | Bayes LINEX (a=-3) | Bayes LINEX (a=3) | CRI length
    (10, 20) 15 (0(14),15) 0.4500 2.0736 1.0201 1.2450 0.8765 1.3259
    (0.0025) (0.0023) (0.0020) (0.0020)
    15 (15,0(14)) 0.3766 0.8338 0.5666 0.7000 0.4843 1.0085
    (0.0201) (0.0270) (0.0278) (0.0268)
    20 (0(19),10) 0.3205 0.8781 0.3128 0.2280 0.2623 0.8047
    (0.1963) (0.1108) (0.1832) (0.0688)
    20 (10,0(19)) 0.4035 0.7331 0.6691 0.7739 0.5954 0.9246
    (0.0854) (0.0724) (0.0724) (0.0382)
    25 (0(7),2,0(4),1,0(3),2,0(8)) 0.5471 1.0630 0.8072 0.9125 0.7279 0.9644
    (0.0216) (0.0165) (0.0127) (0.0107)
    30 (0(30)) 0.6514 1.3684 0.745 0.8343 0.6765 0.8887
    (0.1632) (0.1191) (0.1286) (0.0765)
    (20, 30) 35 (0(34),15) 0.7796 0.7379 0.7796 0.8391 0.7307 0.7379
    (0.1291) (0.1241) (0.1028) (0.1094)
    35 (15,0(34)) 0.4192 0.5150 0.5787 0.6148 0.5477 0.5777
    (0.1010) (0.0320) (0.0461) (0.0461)
    40 (0(39),10) 0.2902 0.3452 0.8947 0.9753 0.8295 0.8497
    (0.2144) (0.0125) (0.0122) (0.0121)
    40 (10,0(39)) 0.4189 0.4923 0.4864 0.4291 0.4499 0.2145
    (0.0930) (0.0820) (0.0724) (0.0620)
    45 (0(5),3,0(33),2,0(5)) 0.3938 0.5113 0.4844 0.4782 0.4653 0.3813
    (0.434) (0.0354) (0.0352) (0.0344)
    50 (0(50)) 0.3731 0.3433 0.4990 0.4265 0.4744 0.3129
    (0.0898) (0.0396) (0.0394) (0.0390)
    (40, 50) 60 (0(59),30) 0.2999 0.4200 0.5524 0.4799 0.5277 0.3152
    (0.0954) (0.0779) (0.0462) (0.0563)
    60 (30,0(59)) 0.6208 0.8487 0.5244 0.5301 0.5030 0.4250
    (0.0488) (0.0244) (0.0223) (0.0222)
    70 (0(69),20) 0.4048 0.4479 0.3692 0.3196 0.3245 0.1888
    (0.4478) (0.1177) (0.3900) (0.3548)
    70 (20,0(69)) 0.6135 0.6612 0.6751 0.6168 0.6376 0.4304
    (0.0456) (0.0258) (0.0671) (0.0915)
    80 (4,0(8),3,0(60),3,0(9)) 0.3866 0.3552 0.5170 0.5274 0.5070 0.3215
    (0.0212) (0.0137) (0.0162) (0.0115)
    90 (0(90)) 0.3746 0.2523 0.4336 0.4443 0.4234 0.2224
    (0.1064) (0.0178) (0.0208) (0.0152)

    Table 4.  Average value, length and corresponding MSE (in parentheses) of estimates for the parameter γ2.
    (m,n) | r | Scheme | MLE | ACI length | Bayes SE | Bayes LINEX (a=-3) | Bayes LINEX (a=3) | CRI length
    (10, 20) 15 (0(14),15) 0.4110 2.0336 1.0052 1.2843 0.8484 1.4092
    (0.0123) (0.0119) (0.0112) (0.0101)
    15 (15,0(14)) 0.8193 2.8032 1.0506 1.2216 0.9307 1.2030
    (0.2696) (0.3634) (0.4493) (0.3978)
    20 (0(19),10) 0.5335 1.5963 1.3381 1.6395 1.1490 1.5159
    (0.0545) (0.0777) (0.7941) (0.7208)
    20 (10,0(19)) 0.2255 0.2141 0.3820 0.3992 0.3666 0.4006
    (0.0085) (0.0067) (0.0078) (0.0044)
    25 (0(7),2,0(4),1,0(3),2,0(8)) 0.4095 0.6813 0.6038 0.6416 0.5715 0.5963
    (0.0120) (0.0092) (0.0081) (0.0073)
    30 (0(30)) 0.2772 0.2753 0.3940 0.4056 0.3833 0.3367
    (0.0188) (0.0088) (0.0075) (0.0069)
    (20, 30) 35 (0(34),15) 0.7089 2.2453 0.8012 0.8529 0.7567 0.6981
    (0.1672) (0.1512) (0.1057) (0.1086)
    35 (15,0(34)) 0.2850 0.2466 0.5229 0.5426 0.5049 0.4333
    (0.1491) (0.0497) (0.0589) (0.0420)
    40 (0(39),10) 0.3109 0.3064 0.3965 0.3623 0.3410 0.2909
    (0.5851) (0.4866) (0.4851) (0.4840)
    40 (10,0(39)) 0.3604 0.2478 0.3265 0.3382 0.3156 0.2302
    (0.1458) (0.0160) (0.0191) (0.0134)
    45 (0(5),3,0(33),2,0(5)) 0.6419 0.6039 0.5232 0.5379 0.5095 0.3788
    (0.3430) (0.0498) (0.0566) (0.0439)
    50 (0(50)) 0.2571 0.1927 0.2427 0.2527 0.2332 0.1163
    (0.1144) (0.0204) (0.0233) (0.0177)
    (40, 50) 60 (0(59),30) 0.4642 0.7702 0.9308 0.9680 0.8969 0.6030
    (0.0270) (0.0179) (0.0171) (0.0165)
    60 (30,0(59)) 0.2766 0.2781 0.2510 0.2591 0.2430 0.1898
    (0.1222) (0.0228) (0.0254) (0.0204)
    70 (0(69),20) 0.3746 0.4258 0.3499 0.3682 0.3329 0.2235
    (0.4441) (0.1225) (0.1356) (0.1108)
    70 (20,0(69)) 0.2747 0.1723 0.4435 0.4506 0.4368 0.2654
    (0.1324) (0.0206) (0.0227) (0.0187)
    80 (4,0(8),3,0(60),3,0(9)) 0.6604 0.5647 0.5858 0.5983 0.5741 0.3527
    (0.1204) (0.0817) (0.0890) (0.0751)
    90 (0(90)) 0.2699 0.1466 0.2436 0.2524 0.2351 0.1170
    (0.1100) (0.0593) (0.0637) (0.0553)


    Based on the aforementioned data, several conclusions can be drawn:

    (1) Tables 1 to 4 reveal that, in most cases, Bayesian estimates outperform MLEs in terms of MSEs.

    (2) Examining Tables 1 to 4, it becomes apparent that the CRIs have shorter average lengths than the corresponding CIs, indicating the superiority of the Bayesian estimators over the MLEs.

    (3) Notably, Bayesian estimation using the LINEX loss function at $a=3$ surpasses Bayesian estimation using the LINEX loss function at $a=-3$ and the SE loss function in terms of MSEs.

    (4) The results indicate that both the MLEs and the Bayesian estimators yield favorable outcomes for the two samples, whether the two populations have equal or different sizes, suggesting the model's suitability for various sampling situations.

    The data represent the air-conditioning system failure times (in hours) for planes 7913 and 7914, originally sourced from Proschan [30]. The assumption is made that the two datasets are independent, and within each dataset, the failure times are also considered independent. The data is provided below.

    Data 1. (Plane 7913): 1, 4, 11, 16, 18, 18, 18, 24, 31, 39, 46, 51, 54, 63, 68, 77, 80, 82, 97, 106, 111, 141, 142, 163, 191, 206, 216.

    Data 2. (Plane 7914): 3, 5, 5, 13, 14, 15, 22, 22, 23, 30, 36, 39, 44, 46, 50, 72, 79, 88, 97, 102, 139, 188, 197, 210.

    Table 5 displays the outcomes of the Kolmogorov-Smirnov (K-S) test, employed to assess the data's adherence to the NHD.

    Table 5.  K-S test and P-value.
    Data set Size (n) K-S (Calculated) K-S (5% Significance) P-value
    I 27 0.1011 0.2544 0.9192
    II 24 0.0870 0.2693 0.9858


    Taking into account the details in Table 5, it is apparent that the computed K-S values for both datasets are lower than the corresponding critical values at the 5% significance level. Additionally, the P-values are notably high. Consequently, we can reasonably conclude that the NHD serves as a well-fitted model for the data. Moreover, we have plotted the empirical and fitted survival functions $S(x)$ for each dataset in Figures 2 and 3, respectively. These plots provide additional confirmation that the NHD model fits the data well.
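    The goodness-of-fit check in Table 5 can be reproduced in outline with SciPy's `kstest`, comparing each complete dataset against the NHD CDF evaluated at fitted parameters. The sketch below (our code, shown for Data 1) fits $(\alpha,\gamma)$ by complete-sample maximum likelihood first, so the reported p-value is the usual approximation that ignores the effect of parameter estimation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

# Data 1 (plane 7913), complete sample
data1 = np.array([1, 4, 11, 16, 18, 18, 18, 24, 31, 39, 46, 51, 54, 63, 68, 77, 80,
                  82, 97, 106, 111, 141, 142, 163, 191, 206, 216], dtype=float)

def nll(theta):
    # complete-sample NHD negative log-likelihood built from (1.2)
    a, g = theta
    if a <= 0 or g <= 0:
        return np.inf
    return -np.sum(np.log(a * g) + (g - 1) * np.log1p(a * data1) + 1 - (1 + a * data1) ** g)

a_hat, g_hat = minimize(nll, x0=[0.01, 1.0], method="Nelder-Mead").x
ks = kstest(data1, lambda x: 1 - np.exp(1 - (1 + a_hat * x) ** g_hat))   # NHD CDF (1.1)
print(a_hat, g_hat, ks.statistic, ks.pvalue)
```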

    Figure 2.  Plots of fitted functions of the NHD for data set I.
    Figure 3.  Plots of fitted functions of the NHD for data set II.

    We generated a JPT-II-CS sample from the aforementioned datasets using the following censoring scheme: the first sample has $m=27$ units, the second sample has $n=24$ units, and the JPT-II-CS is implemented with $r=20$ observed failures. The censoring vectors are defined as:

    S=(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16),R=(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,31),T=(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,15).

    Here are the generated datasets:

    w=(1,3,4,5,5,11,13,14,15,16,18,18,18,22,22,23,24,30,31,36),z=(1,0,1,0,0,1,0,0,0,1,1,1,1,0,0,0,1,0,1,0).

    We obtained estimates for $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$ using the MLE method, based on the data described above. The corresponding results are displayed in Table 6, whereas Tables 7 and 8 present the 95% ACIs for $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$. We utilized the MCMC method for Bayesian estimation, running 20,000 iterations to ensure convergence and excluding the initial 5,000 iterations as burn-in. We selected the hyperparameters $c_i$ and $d_i$ as 0.0001, values near zero, for the prior distributions. Bayesian estimates for $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$ were derived using both the SE and LINEX loss functions, with the corresponding results detailed in Table 6. Additionally, the 95% CRIs for $\alpha_1,\alpha_2,\gamma_1$, and $\gamma_2$ are provided in Tables 7 and 8.

    Table 6.  Different point estimates of (α1,α2,γ1,γ2).
    Parameter | MLE | MCMC SE | MCMC LINEX (a=-3.0) | MCMC LINEX (a=10^-4) | MCMC LINEX (a=3.0)
    α1 0.0211 0.0612 0.0612 0.0612 0.0612
    α2 0.0112 0.0406 0.0406 0.0406 0.0406
    γ1 0.6548 1.5915 2.2824 1.5915 1.2740
    γ2 1.2882 2.0091 2.8235 2.0091 1.6094

    Table 7.  95% CIs and CRIs for (α1,α2).
    Method α1 α2
    Lower Upper Length Lower Upper Length
    CI -0.0801 0.1223 0.2024 -0.0392 0.0615 0.1008
    CRI 0.0606 0.0621 0.0014 0.0401 0.0411 0.0010

    Table 8.  95% CIs and CRIs for (γ1,γ2).
    Method γ1 γ2
    Lower Upper Length Lower Upper Length
    CI -1.7852 3.0948 4.8801 -3.6776 6.2541 9.9317
    CRI 0.7213 2.8120 2.0906 1.0133 3.3537 2.3404


    In this paper, we explored statistical inference for two populations characterized by the NHD. These distributions feature distinct shape and scale parameters, and our analysis focused on a JPT-II-CS. Assuming NHD lifetimes for both populations, we derived maximum likelihood estimates of the unknown parameters and performed Bayesian estimation under gamma and non-informative priors. Two loss functions, namely the SE and LINEX loss functions, were employed. To assess the effectiveness of the proposed estimates, we conducted Monte Carlo simulation experiments, revealing that the Bayes estimates, along with their associated CRIs, outperform the other estimators. Finally, we presented a numerical example to illustrate the inferential results established in this study.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research project was supported by the Researchers Supporting Project Number (RSP2024R488), King Saud University, Riyadh, Saudi Arabia.

    The authors declare no conflict of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
