Research article

Statistical inference of an α-quantile past lifetime function with applications

  • In reliability engineering and survival analysis, quantile functions are fundamental and often the most natural way to represent probability distributions and data samples. In this paper, the α-quantile function of past lifetime was estimated for right-censored data by applying the Kaplan-Meier survival estimator. The weak convergence of the proposed estimator to a Gaussian process was investigated. A confidence interval for the α-quantile of the past life function that does not depend on the density function was proposed. The strong convergence of the estimator to a Gaussian process was also discussed. The properties of the estimator and the confidence interval were investigated in a simulation study. Finally, two real datasets were analyzed.

    Citation: Mohamed Kayid. Statistical inference of an α-quantile past lifetime function with applications[J]. AIMS Mathematics, 2024, 9(6): 15346-15360. doi: 10.3934/math.2024745




    High-quality decisions rely on the information obtained. When a decision-maker makes a so-called here-and-now (HN) decision, it is necessary to consider uncertain parameters that are only revealed in the future, such as market demand in a business setting or the scale of a disaster in humanitarian relief. Stochastic programming (SP) is an often-used framework for addressing such problems. The distribution of the uncertainty considered by the decision-maker is an approximate representation of the true status. In this setting, a possible way to improve decision quality is to acquire extra information. However, the information obtained is usually imperfect [1,2]. To evaluate the value of the information, a Bayesian calculation is usually required to assess the posterior probabilities of the uncertainties [3,4,5].

    In many applications, however, the probabilistic structure used in the Bayesian update might be unavailable due to the following two reasons.

    First, assessing likelihood functions is difficult and uncommon [6], as it requires abundant historical data [7]. However, data scarcity exists in a wide range of industries [8,9,10,11]. For example, during the design and production stage of a new product, information on future demand is usually unavailable. As a consequence, planning decisions must be made based on limited data. Moreover, in the disaster setting, a serious natural disaster in a given area seldom happens in practice. For instance, only seven major earthquakes have occurred along the Longmengshan Fault in China since 1933 [11]. As a result, the data that disaster preparedness depends on are also scarce. Consequently, many applications face the challenge of data scarcity, which makes the classical Bayesian approach inapplicable.

    Second, in the setting of exogenous information acquisition, information providers, e.g., consulting firms, may have rich information, while decision-makers, who are at an informational disadvantage, usually cannot obtain the exact probabilistic interdependence relationships. Accordingly, it is problematic and cumbersome for decision-makers to evaluate the value of the additional information with the help of Bayes' theorem in such a setting.

    Thus, it is necessary to consider the challenge of data scarcity in the interactions between stochastic optimization and information management. The question is how to evaluate and acquire imperfect information for decision-makers using SP in the scarce-data setting.

    In the stochastic environment, a classical way to evaluate the value of information is to focus on the expected value of information (EVOI), which integrates all economic results under the possible final realizations of the uncertainties. As described before, the posterior probabilities of the realizations can be calculated by the Bayesian approach under the given likelihood functions. Then, the EVOI can be measured by the expected added value from the additional information, which is the difference between the benefits under the prior and posterior probabilities. Depending on whether the information acquired is accurate or inaccurate, the EVOI is further specified as the expected value of perfect information (EVPI) or the expected value of imperfect information (EVII), respectively. The former also provides an upper bound on how much a company could pay for any information acquisition activity [3,12,13]. The latter can be used to measure the value of imperfect information [1,14] and has drawn lots of research attention [15,16,17] in the Bayesian context.

    Unlike the aforementioned methods, the model established in this paper focuses on the scarce-data environment where the required probabilistic structure for the Bayesian model is unavailable. Moreover, the following approaches are taken to mitigate the negative influence of data scarcity on decision-making quality.

    First, add flexibility to the decision-making procedure to improve decision outcomes. To be specific, the two-stage optimization approach is incorporated into our study, in which decisions are composed of HN and recourse actions. The former must be determined now, while the latter can be taken once the uncertain parameters reveal their values. By doing so, much better results can be generated than with a static decision-making approach [18]. The two-stage SP model is thus adopted here.

    Moreover, acquiring exogenous information is another efficient way to meet the scarce-data challenge. In such a setting, an information acquisition game involving an information provider is designed. The problem is whether it is beneficial to motivate the provider to enhance information accuracy, especially when the enhancement is costly. Does an optimal coordination mechanism exist?

    Hence, this paper concentrates on the information evaluation and acquisition for decision-makers who adopt two-stage SP. The EVII evaluation and costly information acquisition game have been extensively studied in previous works, and most assume the probabilistic interdependent relationship is available. However, those studies have actually left the questions mentioned above unexplored. To fill this gap, we perform the following research.

    First, a robust way is proposed to identify the worst probabilistic structure. Specifically, to characterize the information imperfectness, a budget value is introduced, i.e., the information inaccuracy ratio (IIR), which can be easily obtained from limited historical observations. Then, under the ambiguity set constrained by the budget value, a robust approach is adopted to replace the Bayesian calculation, and thus the classic wait-and-see (WS) solution in two-stage SP is extended to the robust setting. The problem is formulated as a max-min-min model with a bi-level structure, and three ways are developed to solve the model optimally. The EVII can then be measured by the difference between the value of the robust WS and the expected value under the prior distribution of the underlying uncertainties only. Once the EVII is calculated, the point at which the imperfect additional information becomes worthless can also be identified.

    Second, a costly information acquisition game is studied, in which the decision-maker, who chooses the two-stage SP, can acquire an imperfect forecast from an exogenous information provider. To model this issue, a Stackelberg game is proposed and a linear compensation contract is designed to realize the global optimum. Finally, we show the application of our study in a two-stage shipment problem.

    As such, our study differs from the previous studies in both the problems addressed and the approach used. More specifically, this paper focuses on the two-stage SP with imperfect information update and the costly information acquisition cooperation in the scarce-data setting, which are involved in a broad family of uncertain optimization and decision-making problems. In terms of approach, the classical Bayesian structure is inapplicable due to the unavailability of likelihood functions. Thus, a novel and robust model is created in this paper and under the model, several approaches are proposed to evaluate the EVII by identifying the worst Bayesian structure.

    The remainder of the paper is organized as follows. Section 2 reviews the relevant literature and summarizes the contributions. Section 3 focuses on the EVII in the two-stage SP, including its assumption, definition, and computation. Then, the costly information acquisition game is studied in Section 4. Section 5 applies the proposed approach to a two-stage production and shipment problem. Finally, Section 6 contains some concluding remarks. All the proofs are provided in the appendix.

    This paper connects the studies on the EVOI and costly information acquisition game. In this section, we briefly discuss the literature related to these two streams.

    Here, we focus on the EVOI in terms of the economic consequences of decision-making. To evaluate the EVOI, one of the most well-known metrics is EVPI, which measures the expected gain based on perfect information [13]. It has found widespread applications, e.g., in manufacturing [19], in supply chain risk mitigation [20], and in power planning [21].

    Nevertheless, in practice, it is impossible to get perfect information [1,2]. Thus, an alternative metric, i.e., the EVII, is developed. Howard [3] is one of the pioneers who apply the Bayes-based approach in estimating the EVOI. The imperfectness of the information system can be captured by the likelihood functions between the state of nature and additional information. After that, existing works study the combination of decision analysis and EVII in various settings. A few papers focus on specific application areas, such as medical health [2], oil and gas [6], portfolio [22], and newsvendor problems [23]. In addition, some studies consider different problem settings, such as multiple information sources [14,24], and multicriteria analysis [15,16,25]. Furthermore, in a Bayesian update, the additional information can take the forms of point or probabilistic estimates [26], which also draws corresponding research [27,28]. These works incorporate the EVII into the decision analysis context and evaluate it by economic impacts. However, in the OR context, the literature is limited and some works incorporate Bayesian update into two-stage or multi-stage SP. For instance, Morton et al. [29] combine Bayesian prediction and two-stage SP to address uncertain up-times of manufacturing equipment and uncertain production rates in an employee scheduling problem. Dowson et al. [30] design a multi-stage SP formulation to incorporate belief states, which can be captured by a Bayesian update.

    Unlike the previous works, this paper focuses on the scarce-data challenge, in which the probabilistic structure used in information update might be unavailable. To address this issue, the point estimation way is applicable here and a robust approach is developed to handle the uncertain information update in the two-stage SP setting. Up to now, far too little attention has been paid to this issue.

    Our work is also related to the costly information acquisition game, in which most studies focus on the applications for auction or supply chain problems.

    Regarding the auction, previous works consider that bidders have no or limited information about their valuations for items sold before the auction begins, and they can determine whether to implement costly information acquisition or not before or during the auction. For example, Compte and Jehiel [31] consider the costly information acquisition in an ascending price auction. The bidder acquires his valuation information during the auction at a known cost. Miettinen [32] studies a similar problem in a Dutch auction. Azevedo et al. [33] design a channel auction that combines English and Dutch auctions. It allows bidders to access the information by incurring a cost. Golrezaei and Nazerzadeh [34] design a two-stage mechanism in which the auctioneer can strategically control the information access based on a second-price auction. In these studies, the bidders can refine their valuations through information acquisition.

    Moreover, since information is a crucial driver of supply chain performance improvement, there has been a growing body of literature that quantifies the EVOI. Some of them take the costly information acquisition activity into account. For instance, Fu and Zhu [35] focus on a two-tier supply chain consisting of a supplier and a buyer with the consideration of quantity discount, buy-back, and revenue-sharing contracts. During the implementation of the contracts, the buyer can acquire endogenous demand information with a cost. Li et al. [36] concentrate on upstream firms' information acquisition activities. Differing from these works, Fu et al. [37] focus on another market structure composed of two competing firms. Both firms can determine their production quantities with a costly forecast. These papers have all taken uncertain demand into account. Quality information, which can influence market demand, has also drawn some research in recent years [38,39].

    In summary, the aforementioned works have studied the trade-off between the acquisition cost and the EVOI obtained within auction or supply chain operations, in which Bayesian calculations are performed based on the probabilistic structure of information. In other words, the challenge of data scarcity is ignored. In such a context, a two-stage decision-making way can bring better performance [18]. However, how to incorporate the two-stage way into the costly information acquisition game remains unexplored.

    Based on the aforementioned literature review, the contribution of this paper is twofold.

    (ⅰ) Focus on a two-stage SP with imperfect information update and study the EVII in the scarce-data setting, which is ignored in previous works. We develop a novel and robust model to evaluate the WS value, utilizing only an IIR parameter instead of a complicated probabilistic structure. Three ways are taken to address the intractability of the proposed model, including numerical, analytical, and equivalent reformulation, which are suitable for different settings.

    (ⅱ) Furthermore, optimization and information management are two significant challenges to many real-world problems. Our study takes the two-stage SP and information quality improvement into account and sheds light on their interactions by developing a costly information acquisition game. Moreover, a win-win coordination mechanism is designed for the game. To the best of our knowledge, no similar work has been done before.

    This section first reviews the general formulation of the two-stage SP and the well-known EVPI concept. Second, the robust WS in the imperfect information setting is proposed and a bi-level model is developed to evaluate the robust WS value. Finally, three approaches are designed to calculate the model and then the EVII is obtained in a robust setting.

    Before that, the abbreviations, sets, parameters, and variables used in the model formulation, including the evaluation of the EVII and the costly information acquisition game, are given below (see Table 1).

    Table 1.  Descriptions of abbreviations, sets, parameters, and variables in the model formulation.
    Symbols Description
    SP Stochastic programming
    HN Here-and-now
    WS Wait-and-see
    RP Recourse problem
    EVOI The expected value of information
    EVPI The expected value of perfect information
    EVII The expected value of imperfect information
    IIR Information inaccuracy ratio
    S, s The set and indices of scenarios, s ∈ S
    si The forecast scenario, si ∈ S
    sj The realization, sj ∈ S
    λ(i|j) The conditional probability of the forecast scenario si occurring given the realization sj
    pj The probability of scenario sj
    Γ The IIR value
    c Cost coefficient associated with the HN decisions
    q Cost coefficient associated with the recourse decisions
    τ Unit information quality cost associated with the existing Γ
    τ1, τ2 Unit information improvement costs associated with ΔΓ and ΔΓ²
    θ The payoff from the SP decision-maker to the information provider for her information with the IIR Γ
    x, X The HN decisions and their feasible space
    y, Y The recourse decisions and their feasible space
    ΔΓ The extent of information quality improvement
    α, β The variables specifying a linear compensation contract


    Consider a two-stage linear SP, in which the HN decisions $x$ ($\in \mathbb{R}^{n_1}$ or $\mathbb{Z}^{n_1}$) should be determined before the uncertain parameters reveal their values, while the recourse decisions $y$ ($\in \mathbb{R}^{n_2}$ or $\mathbb{Z}^{n_2}$) depend on $x$ and on the realization of the parameters. Let $S$ be the set of scenarios describing the underlying uncertainties and $s$ ($\in S$) be one realization among the scenarios.

    Thus, the objective of the two-stage SP, a.k.a. the recourse problem (RP), is to minimize the total cost of the two stages, in which the second-stage recourse cost is an expected value under a set of discrete scenarios:

    $RP=\min_{x\in X}\left(c^Tx+E_s\left(\min_{y(s)\in Y(x,s)}q(s)^Ty(s)\right)\right),$ (1)

    where $X$ is the feasible space of $x$, and $Y(x,s)$ is the feasible space of $y$ defined by the given $x$ and realization $s$. Since $y$ and the coefficient $q$ are scenario-dependent, they are denoted as $y(s)$ and $q(s)$, respectively; $c\in\mathbb{R}^{n_1}$ and $q(s)\in\mathbb{R}^{n_2}$. The RP value gives the expected performance without additional information.

    Moreover, Madansky [40] introduces the WS value. It is an expected value that can be calculated by first acquiring the perfect information and then making the best decision:

    $WS=E_s\left(\min_{x(s)\in X}\left(c^Tx(s)+\min_{y(s)\in Y(x(s),s)}q(s)^Ty(s)\right)\right),$ (2)

    where both the HN decisions and recourse decisions are given based on the perfect information s. Thus, the WS is an expected value of all possible s.

    Therefore, the EVPI is, by definition, the difference between the WS and RP, namely,

    $EVPI=RP-WS.$ (3)

    This metric also measures the upper bound of the cost of acquiring complete (and accurate) information [41].
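    To make the relationship between RP, WS, and EVPI in (1)–(3) concrete, the following minimal Python sketch (not from the original paper; all data are hypothetical) solves a toy single-product instance with one first-stage order quantity and scenario-dependent emergency purchases, using scipy's linear programming routine.

```python
# Toy illustration of RP, WS, and EVPI (Eqs. (1)-(3)); hypothetical data.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.3, 0.5, 0.2])       # scenario probabilities
d = np.array([80.0, 100.0, 130.0])  # scenario demands
c, q = 5.0, 8.0                     # first-stage and recourse unit costs (q > c)
S = len(p)

# RP: one LP over (x, y_1, ..., y_S); recourse covers the shortage: x + y_s >= d_s.
obj = np.concatenate(([c], p * q))                 # c*x + sum_s p_s*q*y_s
A_ub = np.hstack((-np.ones((S, 1)), -np.eye(S)))   # -x - y_s <= -d_s
RP = linprog(obj, A_ub=A_ub, b_ub=-d, method="highs").fun

# WS: solve each scenario with perfect information, then take the expectation.
WS = float(p @ [linprog([c, q], A_ub=[[-1.0, -1.0]], b_ub=[-ds], method="highs").fun
                for ds in d])

EVPI = RP - WS   # upper bound on the value of perfect information
print(RP, WS, EVPI)
```

    Since WS is never larger than RP, the sketch always returns a non-negative EVPI, consistent with (3).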

    In this section, the WS value is extended to the imperfect information setting and a robust formulation is developed to address data scarcity.

    Begin with the description of information inaccuracy. Specifically, consider that the decision-maker has knowledge about the distribution of scenarios, but is usually unsure which one will happen. The decision-maker can learn the future status by a forecast; however, the information obtained is usually inaccurate.

    To formulate the case, let $s$ ($\in S$) be the scenario that will happen in the future, while the point forecast is $s_i$ ($\in S$). The probability of the corresponding misinformation is denoted as $\lambda_i$ ($\in[0,1]$), and thus the total inaccurate probability is $\sum_{s_i\in S,\,s_i\neq s}\lambda_i$. Define the IIR, denoted as $\Gamma$, as the upper bound of this total inaccurate probability. The exact value of $\lambda_i$ is hard to obtain. In contrast, $\Gamma$ is assumed to be available to the decision-maker, since historical inaccurate forecasts can be observed even when data are scarce: it can be estimated by the proportion of inaccurate forecasts among all past forecasts. Hence, we have:

    $\sum_{s_i\in S}\lambda_i=1,\qquad \sum_{s_i\in S,\,s_i\neq s}\lambda_i\leq\Gamma<1.$ (4)
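    As a concrete (hypothetical) illustration of the estimate described above, the IIR $\Gamma$ can be read off a short forecast history as the fraction of misses:

```python
# Toy sketch: estimate Gamma as the share of past forecasts that missed the realization.
forecasts    = ["s2", "s4", "s5", "s5", "s7"]   # hypothetical forecast history
realizations = ["s2", "s4", "s6", "s5", "s8"]   # what actually happened
Gamma = sum(f != r for f, r in zip(forecasts, realizations)) / len(forecasts)
print(Gamma)   # 0.4 for this toy history
```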

    Then, evaluate the EVII by estimating the expected gain of imperfect additional information. As is discussed before, the expected benefit without additional information is the RP shown in (1). Thus, the key is to give the WS value in the imperfect information setting.

    Obviously, in the perfect information setting, the benefit of the information can be evaluated by making the HN and recourse decisions based on the perfect information. The WS value in such a setting can be obtained by (2). However, when information is imperfect, the forecast and the final scenarios are usually different. In such a setting, decision-makers will of course adjust recourse decisions according to the realization. Thus, the WS value can be obtained by first making the optimal HN decisions with the forecasting scenario and then by making the best recourse decisions with the realization. In this context, the calculation of the two-stage decisions relies on two sequential optimization problems. Next, formulate the WS value and incorporate the information inaccuracy into the evaluation.

    Based on the above analysis, first focus on how to make the best decisions for the forecast. Specifically, with the forecast $s_i$, the optimal HN decisions, denoted as $x^*(s_i)$, are given by minimizing the two-stage cost in the following linear program:

    $x^*(s_i)=\arg\min_{x(s_i)\in X}\left(c^Tx(s_i)+\min_{y(s_i)\in Y(x(s_i),s_i)}q(s_i)^Ty(s_i)\right).$ (5)

    It is worth noting that both the HN and recourse decisions are given based on the forecast scenario si. Thus, the inner minimization problem in (5) is an estimation of the recourse cost under the forecast scenario si. In this way, the value of this additional information can be captured.

    Next, the real scenario, denoted as $s$, is revealed. The decision-maker will certainly adjust the recourse decisions, denoted as $y^*(s_i,s)$, to optimize the recourse cost under the given $x^*(s_i)$ and $s$. As such, we have the following linear program:

    $y^*(s_i,s)=\arg\min_{y(s_i,s)\in Y(x^*(s_i),s)}q(s)^Ty(s_i,s),$ (6)

    where the feasible space of $y(s_i,s)$ is determined by the HN decisions $x^*(s_i)$ and the realization $s$.

    Thus, in terms of the forecast scenario $s_i$ and the final realization $s$, the above two sequential optimization models make the two-stage decisions, respectively. The WS value, denoted as $\overline{WS}(s_i,s)$, is the sum of the first-stage cost determined by $x^*(s_i)$ and the recourse cost based on $y^*(s_i,s)$:

    $\overline{WS}(s_i,s)=c^Tx^*(s_i)+q(s)^Ty^*(s_i,s).$

    Moreover, since the final realization s is unknown in the first stage, use sj to represent the realization and the following formula can be obtained:

    $\overline{WS}(s_i,s_j)=c^Tx^*(s_i)+q(s_j)^Ty^*(s_i,s_j).$ (7)

    Besides, the conditional probability $\lambda(i|j)$ is introduced to indicate the probability of the forecast scenario $s_i$ occurring given the realization $s_j$. Thus, for the given realization $s_j$, the expected value of (7) can be expressed as $\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_i,s_j)$ by considering all possible forecast scenarios $s_i$. Furthermore, after taking all possible realizations $s_j$ into account, the final expected WS value can be written as:

    $\overline{WS}(\lambda)=\sum_{s_j\in S}p_j\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_i,s_j),$ (8)

    where pj is the probability associated with the scenario sj.

    As mentioned before, this paper focuses on the scarce-data setting, in which the probabilistic interdependence relationship $\lambda(i|j)$ is hard to give. To address this issue, a robust way is adopted to identify the worst probabilistic relationship. Then, the robust WS value under a given IIR $\Gamma$, denoted as $\overline{WS}^R(\Gamma)$, can be given as:

    $\overline{WS}^R(\Gamma)=\max_{\lambda}\overline{WS}(\lambda)=\max_{\lambda}\sum_{s_j\in S}p_j\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_i,s_j),$ (9a)
    s.t. $\quad$ (5)–(7), $\quad\sum_{s_i\in S}\lambda(i|j)=1,\ \forall j,$ (9b)
    $\sum_{s_i\in S,\,s_i\neq s_j}\lambda(i|j)\leq\Gamma,\ \forall j,$ (9c)
    $\lambda(i|j)\geq 0,\ \forall i,j.$ (9d)

    (9b) ensures that the sum of the conditional probabilities for each given realization is 1. (9c) specifies the limit of the IIR Γ. Thus, the ambiguity set defined by (9b) and (9c) characterizes all possible probabilistic structures, and the model identifies the worst case.

    Finally, two characteristics of model (9) are discussed below:

    At first, construct the ambiguity set (9b) and (9c) under the information imperfectness limit Γ to capture possible probabilistic relationships. Such ambiguity set constrained by a budget value is often used in robust optimization [42]. Thus, two benefits exist. One is that the exact probabilistic structure is not required here. The other one is that the price of information imperfectness can be explored by varying the budget value, or by comparing various information sources with different information imperfectness limits.

    Furthermore, the model is essentially a max-min-min problem with a bi-level structure. The objective (9a) defines the upper-level problem to identify the worst information update structure. The lower level is composed of two sequential optimization problems, i.e., (5) and (6), due to the two-stage decision-making structure. However, bi-level programs are generally intractable [43]. To further solve the problem, we explore three ways to find its optimal solution.

    In this section, three approaches are developed to address model (9) at first, and then, the EVII concept is discussed.

    First, consider a scenario pair $(s_i,s_j)$ composed of the forecast scenario $s_i$ and the realization $s_j$. The value of $\overline{WS}(s_i,s_j)$, see (7), does not depend on $\lambda(i|j)$. Thus, it can be given by solving (5) and (6) sequentially when $i\neq j$, or only by (5) when $i=j$.

    All $\overline{WS}(s_i,s_j)$ can be obtained by enumerating each scenario pair $(s_i,s_j)$, which requires

    $2|S|(|S|-1)+|S|=2|S|^2-|S|$

    linear programs. When all $\overline{WS}(s_i,s_j)$ are given, solving $\overline{WS}^R(\Gamma)$, see (9), is a linear program. In summary, solving problem (9) can be decomposed into $2|S|^2-|S|+1$ small-sized linear programs.
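    This decomposition can be sketched as follows (an illustrative Python fragment, not the authors' code): once the matrix of $\overline{WS}(s_i,s_j)$ values has been obtained from (5)–(7), the upper-level problem (9) separates over the realizations $s_j$, and one small LP per $s_j$ suffices.

```python
# Sketch of the upper-level LP in (9), assuming Wbar[i, j] = WS(s_i, s_j) is precomputed.
import numpy as np
from scipy.optimize import linprog

def ws_robust_lp(Wbar, p, Gamma):
    Wbar, p = np.asarray(Wbar, float), np.asarray(p, float)
    S, total = len(p), 0.0
    for j in range(S):
        c = -Wbar[:, j]                              # maximize sum_i lambda_i * Wbar[i, j]
        A_eq, b_eq = np.ones((1, S)), [1.0]          # (9b): each column sums to one
        A_ub = np.ones((1, S)); A_ub[0, j] = 0.0     # (9c): off-diagonal mass <= Gamma
        res = linprog(c, A_ub=A_ub, b_ub=[Gamma], A_eq=A_eq, b_eq=b_eq, method="highs")
        total += p[j] * (-res.fun)
    return total
```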

    Furthermore, when all $\overline{WS}(s_i,s_j)$ are given, (9) can also be solved in the following analytical way. To be specific, for each $j$, denote $i^*=\arg\max_{1\leq i\leq|S|,\,i\neq j}\overline{WS}(s_i,s_j)$. In other words, $s_{i^*}$ is the worst forecast scenario for the realization $s_j$.

    Thus, to maximize (9), set $\lambda(i^*|j)$ to be $\Gamma$, and $\lambda(j|j)$ to be $1-\Gamma$ in line with constraint (9b). As a result, the optimal solution and objective are as below:

    Proposition 1. Under (9) and the IIR $\Gamma$, the optimal $\lambda^*(\Gamma)$ and the corresponding $\overline{WS}^R(\Gamma)$ can be given below:

    $\lambda^*(\Gamma)=\left\{\lambda_{1\leq j\leq|S|}(\Gamma):\ \lambda(i^*|j)=\Gamma;\ \lambda(i|j)=0,\ i\neq i^*,\ i\neq j;\ \lambda(j|j)=1-\Gamma\right\},$ (10)

    and

    $\overline{WS}^R(\Gamma)=\sum_{s_j\in S}p_j\left(\Gamma\overline{WS}(s_{i^*},s_j)+(1-\Gamma)\overline{WS}(s_j,s_j)\right)=\sum_{s_j\in S}p_j\left(\overline{WS}(s_j,s_j)+\Gamma G_j\right),$ (11)

    where $G_j=\overline{WS}(s_{i^*},s_j)-\overline{WS}(s_j,s_j)$.
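    A compact illustrative sketch of the closed form (11) (again not the authors' code), under the same assumed matrix Wbar and probabilities p as in the previous fragment, is the following.

```python
# Sketch of Proposition 1: WS^R(Gamma) = sum_j p_j * (Wbar[j, j] + Gamma * G_j).
import numpy as np

def ws_robust_closed_form(Wbar, p, Gamma):
    Wbar, p = np.asarray(Wbar, float), np.asarray(p, float)
    diag = np.diag(Wbar)                                         # WS(s_j, s_j)
    masked = Wbar + np.where(np.eye(len(p)) == 1, -np.inf, 0.0)  # hide the diagonal
    G = masked.max(axis=0) - diag                                # G_j of (11)
    return float(p @ (diag + Gamma * G))
```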

    Proposition 1 provides an analytical way to address problem (9) optimally. Thus, this way can be used in the costly information acquisition game study. Nevertheless, the pre-calculation of all $\overline{WS}(s_i,s_j)$ is still required. Next, a one-off way is presented.

    Finally, if the decisions of the two stages are linear, a robust counterpart of model (9) can be developed. To be specific, re-write problems (5) and (6) as their Karush-Kuhn-Tucker (KKT) conditions. For conciseness, introduce the operator $Cov_{KKT}(\min(.))$, a set of equalities and inequalities representing the KKT conditions of the linear program $\min(.)$. Thus, problems (5) and (6) can be equivalently reformulated as:

    $Cov_{KKT}\left[\min_{x(s_i)\in X}\left(c^Tx(s_i)+\min_{y(s_i)\in Y(x(s_i),s_i)}q(s_i)^Ty(s_i)\right)\right],$ (12)

    and

    $Cov_{KKT}\left[\min_{y(s_i,s_j)\in Y(x^*(s_i),s_j)}q(s_j)^Ty(s_i,s_j)\right],$ (13)

    respectively. In (13), $x^*(s_i)$ is the optimal HN decision of (5), which can also be determined by (12). Then, the following equivalence is obtained:

    $\overline{WS}^R(\Gamma)=\max_{\lambda,x,y}\sum_{s_j\in S}p_j\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_i,s_j)\quad\text{s.t.}\ (7),\ (9b)\text{–}(9d),\ (12),\ (13).$

    In the objective of this model, $\lambda(i|j)\overline{WS}(s_i,s_j)$ is a bi-linear term, which makes the model intractable. Nevertheless, the following tractable equivalence can be obtained by introducing auxiliary variables; the detailed proof is presented in the appendix.

    Proposition 2. Model (9) can be equivalently reformulated as below:

    $\overline{WS}^R(\Gamma)=\max_{x,y,\gamma,\rho}\sum_{s_j\in S}p_j\sum_{s_i\in S}\Gamma\rho_{ij},$ (14a)
    s.t. $\quad\sum_{s_i\in S}\gamma_{ij}=1,\ \forall j,$ (14b)
    $\rho_{ij}\leq\overline{WS}(s_i,s_j),\ \forall i,j,$ (14c)
    $\rho_{ij}\leq\gamma_{ij}M,\ \forall i,j,$ (14d)
    $\gamma_{ij}\in\{0,1\},\ \forall i,j,\quad (7),(12),(13),$ (14e)

    where $\gamma_{ij}$ is a binary auxiliary variable, $\rho_{ij}$ is a continuous auxiliary variable, and $M$ is a sufficiently large number.

    Notice that it is unnecessary to calculate $\overline{WS}(s_i,s_j)$ in (14c) in advance, for the optimal decisions involved in $\overline{WS}(s_i,s_j)$ are directly given by (12) and (13).

    In summary, problem (9) can be optimally solved through the three approaches. Both the first and the second approach can address the problems with linear or integer decision variables, while the second one gives the closed-form solution. However, these two approaches can only be adopted to solve a series of small-sized problems. The third approach provides a one-off way but can only address linear decision variables.

    The robust WS value can be given by the aforementioned approaches, while the RP can be given by (1). Thus:

    $EVII=\max\{RP-\overline{WS}^R(\Gamma),\,0\},$ (15)

    which specifies the worst-case expected gain that a decision-maker adopting the two-stage SP can obtain from an imperfect information source under the IIR $\Gamma$.

    Next, the following are discussed.

    At first, the perfect information setting is a special case of our study. To be specific, when $\Gamma=0$, it is the perfect information setting. The robust WS value given by (11), i.e., $\overline{WS}^R(\Gamma=0)$, is equivalent to the WS value under the perfect information setting, i.e., (2). Then, $EVII=EVPI$. In the perfect information setting, it is proved that $EVPI=RP-WS\geq 0$ since $RP\geq WS$ [40]. When imperfect information exists, however, it is evident that $\overline{WS}(s_i,s_j)\geq\overline{WS}(s_j,s_j)$. Thus,

    $\overline{WS}(\lambda)=\sum_{s_j\in S}p_j\left(\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_i,s_j)\right)\geq\sum_{s_j\in S}p_j\left(\sum_{s_i\in S}\lambda(i|j)\overline{WS}(s_j,s_j)\right)=\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)=WS.$

    Furthermore, from (9), as $\Gamma$ increases, the feasible space enlarges. Thus, the $\overline{WS}^R$ value is non-decreasing, and hence the EVII value is non-increasing in $\Gamma$. This means that information imperfectness can deteriorate the EVOI. A natural question is then when the imperfect information becomes worthless. To answer this question, the following model is developed:

    $\min_{0\leq\Gamma\leq 1}\Gamma,$ (16a)
    s.t. $\quad RP\leq\overline{WS}^R(\Gamma).$ (16b)

    This linear programming model identifies the minimal $\Gamma$, denoted as $\Gamma^*$, that makes the imperfect information useless. In other words, when the information imperfectness extent exceeds $\Gamma^*$, the additional information is unnecessary, and thus the RP solution will be better.
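    Using the closed form (11), both (15) and (16) admit a direct sketch: constraint (16b) becomes linear in $\Gamma$, so the threshold $\Gamma^*$ has an explicit expression (the snippet below is illustrative, not the authors' code).

```python
# Sketch of Eq. (15) and model (16) via the closed form (11).
import numpy as np

def evii_and_gamma_star(Wbar, p, RP, Gamma):
    Wbar, p = np.asarray(Wbar, float), np.asarray(p, float)
    diag = np.diag(Wbar)
    masked = Wbar + np.where(np.eye(len(p)) == 1, -np.inf, 0.0)
    G = masked.max(axis=0) - diag
    evii = max(RP - float(p @ (diag + Gamma * G)), 0.0)              # Eq. (15)
    # smallest Gamma with RP <= sum_j p_j (Wbar[j, j] + Gamma * G_j), clipped to [0, 1]
    gamma_star = float(np.clip((RP - p @ diag) / (p @ G), 0.0, 1.0))
    return evii, gamma_star
```

    For the instance reported in Section 5, this ratio evaluates to about 0.242, consistent with the $\Gamma^*$ given there.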

    In this section, a costly information acquisition game is designed between a decision-maker (hereafter "he") and an imperfect information provider (hereafter "she"), and a dedicated cooperation mechanism is developed.

    Assume that the information provider, with the historical IIR $\Gamma$, provides information to the decision-maker, who adopts the two-stage SP. The latter applies a win-win mechanism to motivate the provider to improve her information quality to $\Gamma-\Delta\Gamma$. We have $\Gamma^*>\Gamma\geq\Gamma-\Delta\Gamma\geq 0$. The first inequality holds because, by (16), one can assume that cooperation will only be considered when the provider's IIR is better than $\Gamma^*$.

    The cost of the provider has two parts: $\tau(1-\Gamma)$ ($\tau>0$) and $\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2$ ($\tau_1>0$, $\tau_2>0$). The first is the sunk cost associated with the existing IIR $\Gamma$. Because perfect information is hard to achieve, the closer the improved information gets to perfection, the more it costs; the quadratic form therefore captures the increasing incremental cost of each additional amount of information quality enhancement. In practical environments, e.g., the demand forecast of new products, $\Delta\Gamma$ can be estimated by the enlargement extent of the market survey scale.

    Thus, before the improvement practice, the EVII of the decision-maker dependent on SP is:

    $EVII(\Gamma)=\max\{RP-\overline{WS}^R(\Gamma),\,0\}=RP-\overline{WS}^R(\Gamma).$

    The first equality is given by (15), while the second one holds because $\Gamma^*>\Gamma$. Similarly, when the provider updates her IIR to $\Gamma-\Delta\Gamma$, the corresponding EVII equals:

    $EVII(\Gamma-\Delta\Gamma)=\max\{RP-\overline{WS}^R(\Gamma-\Delta\Gamma),\,0\}=RP-\overline{WS}^R(\Gamma-\Delta\Gamma).$

    Let $\pi_p^0$ and $\pi_d^0$ be the costs of the information provider and the decision-maker, respectively, without the information quality improvement practice, and let $\pi_p$ and $\pi_d$ be the costs of the two players with the implementation of the practice. Thus:

    $\begin{cases}\pi_p^0=\tau(1-\Gamma)-\theta,\\ \pi_d^0=\theta+\overline{WS}^R(\Gamma)-RP,\end{cases}$ (17)

    where $\theta$ ($\leq EVII$) is the payoff from the decision-maker to the provider for her information with the IIR $\Gamma$.

    $\begin{cases}\pi_p=\tau(1-\Gamma)+\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2-\nu,\\ \pi_d=\nu+\overline{WS}^R(\Gamma-\Delta\Gamma)-RP,\end{cases}$ (18)

    where $\nu$ is the payment that the decision-maker offers to the provider for her IIR $\Gamma-\Delta\Gamma$.

    Here, we focus on information quality improvement in the costly information acquisition game. The global optimum is derived first, and then, a mechanism is designed by specifying ν to realize the optimum.

    In this section, the global optimum is figured out in the first place and it is then set as the benchmark. To get it, discard the payment ν and consider the centralized way. Thus, from (18), the global optimization problem can be written as:

    $\min_{0\leq\Delta\Gamma\leq\Gamma}(\pi_d+\pi_p)=\min_{0\leq\Delta\Gamma\leq\Gamma}\left(\tau(1-\Gamma)+\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2+\overline{WS}^R(\Gamma-\Delta\Gamma)-RP\right)=\tau(1-\Gamma)-RP+\min_{0\leq\Delta\Gamma\leq\Gamma}\left(\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2+\max_{\lambda\in\Lambda(\Gamma-\Delta\Gamma)}\overline{WS}(\lambda)\right),$ (19)

    where $\Lambda(\Gamma-\Delta\Gamma)=\left\{\lambda:\ \sum_i\lambda(i|j)=1,\ \sum_{i\neq j}\lambda(i|j)\leq\Gamma-\Delta\Gamma,\ \forall j;\ \lambda\geq 0;\ \lambda\in\mathbb{R}^{|S|\times|S|}\right\}$.

    In the robust setting, (19) is a min-max-min-min problem. To be specific, the objective function is in the form of min-max, while the constraints involve two sequential minimization problems, i.e., (5) and (6). To address its intractability, use (11) in Proposition 1 to replace the inner maximization problem in (19). Thus, re-write (19) in the following equivalent way:

    $\min_{0\leq\Delta\Gamma\leq\Gamma}(\pi_d+\pi_p)=\tau(1-\Gamma)-RP+\min_{0\leq\Delta\Gamma\leq\Gamma}\left\{\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+(\Gamma-\Delta\Gamma)G_j\right]\right\}.$ (20)

    Thus, the optimal results of model (20) are given in the following proposition, and its proof is presented in the appendix.

    Proposition 3. The optimal solution to model (20), denoted as $\Delta\Gamma^c$, should be:

    $\Delta\Gamma^c=\min\left\{\left(\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\right)^+,\ \Gamma\right\},$ (21)

    where $a^+=\max\{a,0\}$.
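    Equation (21) translates directly into a one-line computation (illustrative only); here sum_pG stands for $\sum_{s_j\in S}p_jG_j$, assumed precomputed from the $\overline{WS}$ matrix as in the earlier sketches.

```python
# Sketch of Proposition 3 (Eq. (21)): globally optimal information quality improvement.
def delta_gamma_c(sum_pG, tau1, tau2, Gamma):
    return min(max((sum_pG - tau1) / (2.0 * tau2), 0.0), Gamma)
```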

    Next, focus on the problem of when an information quality improvement practice is beneficial. To answer this question, we have:

    Corollary 1. Only when $\sum_{s_j\in S}p_jG_j>\tau_1$ is the information quality improvement practice beneficial.

    Its proof is given in the appendix. The result coincides with that in Proposition 3. Specifically, when $\sum_{s_j\in S}p_jG_j>\tau_1$, we have $\Delta\Gamma^c>0$, which means the provider will improve the information accuracy. Therefore, in the next section, only this case is considered, and the design of the coordination mechanism is discussed. Furthermore, it is easy to give Corollary 2 to show how $\Delta\Gamma^c$ varies with $\tau_1$ and $\tau_2$.

    In this section, an information quality improvement compensation mechanism is developed for the costly information acquisition game. The Stackelberg game is adopted. To be specific, the decision-maker depending on SP is the leader, who specifies the payment $\nu$ as a linear contract, i.e., $\alpha+\beta\Delta\Gamma$ with $\beta>0$. Thus, the leader's strategy is to determine $\alpha,\beta$. The information provider is the follower, who decides her information quality improvement, i.e., $\Delta\Gamma$.

    The aim is to focus on the optimal design of the contract. Notice that (19) is a min-max-min-min problem, which is intractable. The mechanism design here is more complicated because the optimal decisions of the follower should be taken into account. However, by Proposition 1, the decision-maker's problem can be formulated in the following relatively concise form:

    $\min_{\alpha,\beta>0}\pi_d(\alpha,\beta)=\min_{\alpha,\beta>0}\left(\alpha+\beta\Delta\Gamma+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+(\Gamma-\Delta\Gamma)G_j\right]-RP\right),$ (22a)
    s.t. $\quad\pi_d(\alpha,\beta)\leq\pi_d^0,$ (22b)
    $\min_{0\leq\Delta\Gamma\leq\Gamma}\pi_p(\Delta\Gamma)\leq\pi_p^0.$ (22c)

    (22b) and (22c) are the individual rationality constraints of the decision-maker and the information provider, respectively.

    Solve problem (22) by using backward induction. Start with the best response of the provider, denoted as $\Delta\Gamma^d$. According to (18) and $\nu=\alpha+\beta\Delta\Gamma$, one can obtain the objective of the information provider:

    $\min_{0\leq\Delta\Gamma^d\leq\Gamma}\pi_p(\Delta\Gamma^d)=\min_{0\leq\Delta\Gamma^d\leq\Gamma}\left(\tau(1-\Gamma)+\tau_1\Delta\Gamma^d+\tau_2(\Delta\Gamma^d)^2-\beta\Delta\Gamma^d-\alpha\right).$

    It is a quadratic function of $\Delta\Gamma^d$. Similar to the proof of Proposition 3, her best response satisfies:

    $\Delta\Gamma^d=\min\left\{\left(\frac{\beta-\tau_1}{2\tau_2}\right)^+,\ \Gamma\right\}.$ (23)

    Then, the optimal solution $\langle\alpha^*,\beta^*\rangle$ of the Stackelberg game can be given in the following proposition. The proof is presented in the appendix.

    Proposition 4. The optimal strategy and cost of the decision-maker can be given as follows:

    $\langle\alpha^*,\beta^*\rangle=\left\langle\theta-\tau_2(\Delta\Gamma^c)^2,\ 2\tau_2\Delta\Gamma^c+\tau_1\right\rangle,$ (24)

    and

    $\pi_d(\alpha^*,\beta^*)=\theta+\tau_2(\Delta\Gamma^c)^2+\tau_1\Delta\Gamma^c+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+(\Gamma-\Delta\Gamma^c)G_j\right]-RP.$ (25)

    From Propositions 3 and 4, re-write $\beta^*$ as:

    $\beta^*=2\tau_2\Delta\Gamma^c+\tau_1=2\tau_2\min\left\{\left(\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\right)^+,\ \Gamma\right\}+\tau_1.$

    Thus, (23) yields

    $\Delta\Gamma^d=\Delta\Gamma^d(\beta^*)=\min\left\{\left(\frac{\beta^*-\tau_1}{2\tau_2}\right)^+,\Gamma\right\}=\min\left\{\left(\min\left\{\left(\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\right)^+,\Gamma\right\}\right)^+,\Gamma\right\}=\min\left\{\left(\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\right)^+,\Gamma\right\}=\Delta\Gamma^c.$ (26)

    Hence, $\langle\alpha^*,\beta^*\rangle$ in (24) induces a response of the information provider, $\Delta\Gamma^d$, that is the same as $\Delta\Gamma^c$ in the global optimum. In other words, the proposed linear contract can help to realize the global optimum.
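    For illustration, the coordinating contract of Proposition 4 can be sketched as follows (a hypothetical helper, with sum_pG = $\sum_{s_j\in S}p_jG_j$ and theta the base information payoff, both assumed given); by construction the provider's best response (23) to this contract equals $\Delta\Gamma^c$.

```python
# Sketch of the coordinating linear contract (Eq. (24)) and the induced response (Eq. (23)).
def coordinating_contract(theta, sum_pG, tau1, tau2, Gamma):
    dGc = min(max((sum_pG - tau1) / (2.0 * tau2), 0.0), Gamma)     # Eq. (21)
    alpha = theta - tau2 * dGc ** 2
    beta = 2.0 * tau2 * dGc + tau1
    response = min(max((beta - tau1) / (2.0 * tau2), 0.0), Gamma)  # Eq. (23), equals dGc
    return alpha, beta, response
```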

    In this section, a two-stage production and shipment planning problem is introduced to show the applicability of our study.

    Suppose a decision-maker, who uses two-stage SP, has $|M|$ warehouses and $|D|$ demand points. The decisions occur in two stages. In the first stage, he decides to produce and store $x_m$ ($\geq 0$) units of product at warehouse $m$ ($\in M$) at a unit cost of $c_1$. Next, the sales season begins and a demand scenario $s$ is realized, in which the demand of location $d$ ($\in D$) is denoted as $n_{sd}$. To satisfy $n_{sd}$, he can ship $y_{smd}$ ($\geq 0$) units of product from warehouse $m$ at a unit cost of $c_{md}$. If a shortage occurs, he needs to place an additional order of $y_{sm}$ ($\geq 0$) units of product to replenish the stock of warehouse $m$ at a cost of $c_2$ ($>c_1$) per unit. Thus, the two-stage SP model considered by the decision-maker is given as:

    $\min_x z=\sum_{m\in M}c_1x_m+E_s\left[\min_y\left(\sum_{m\in M}c_2y_{sm}+\sum_{m\in M}\sum_{d\in D}c_{md}y_{smd}\right)\right]$
    s.t. $\quad\sum_{m\in M}y_{smd}\geq n_{sd},\ \forall d\in D,$
    $\sum_{d\in D}y_{smd}\leq x_m+y_{sm},\ \forall m\in M,$
    $x_m,\ y_{sm},\ y_{smd}\geq 0,\ \forall m\in M,\ \forall d\in D.$ (27)

    Consider a case with $|M|=3$ and $|D|=10$. The discrete prior demand scenarios and the shipment costs are given in Tables 2 and 3. Let $c_1=500$ and $c_2=800$.

    Table 2.  Demand scenarios.
    Scenario d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 Prior probability
    s1 1025 1086 1408 2792 2798 2961 3174 3213 3777 4340 0.08
    s2 1241 1746 1815 2106 2544 2684 2878 3679 3807 4620 0.11
    s3 1259 1907 2048 2092 2662 2701 3234 3495 4766 4907 0.11
    s4 1049 1188 2417 2437 2788 3215 3744 3809 4879 5144 0.16
    s5 1916 2502 2890 2754 2890 2990 3131 3618 4568 4960 0.19
    s6 1564 1669 2492 2619 3521 3662 4138 4563 4899 5210 0.14
    s7 2204 2377 2954 3165 3676 3940 4374 4930 5034 5423 0.12
    s8 2334 2466 3215 3468 4122 4735 4923 5221 5378 5734 0.09

    Table 3.  Unit shipment cost from the warehouses to the locations.
    d1 d2 d3 d4 d5 d6 d7 d8 d9 d10
    m1 80 40 60 10 50 24 40 35 30 55
    m2 50 28 48 50 10 65 25 40 37 45
    m3 69 54 30 55 45 43 25 25 30 10


    The models are solved to optimality on a notebook PC with a 2.3 GHz CPU and 8 GB of memory, using Python and DOcplex.
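    The deterministic equivalent of model (27) is a single linear program; a sketch in the spirit of the reported Python/DOcplex implementation (not the authors' code; prob[s], demand[s][d], and ship_cost[m][d] are assumed to hold the data of Tables 2 and 3) is given below.

```python
# Sketch of the deterministic equivalent of model (27) with DOcplex.
from docplex.mp.model import Model

def solve_rp(prob, demand, ship_cost, c1=500, c2=800):
    S, D, M = len(prob), len(demand[0]), len(ship_cost)
    mdl = Model(name="two_stage_shipment_RP")
    x = mdl.continuous_var_list(M, lb=0, name="x")                       # first-stage stock
    yo = mdl.continuous_var_matrix(range(S), range(M), lb=0, name="yo")  # extra orders
    ys = {(s, m, d): mdl.continuous_var(lb=0, name=f"ship_{s}_{m}_{d}")
          for s in range(S) for m in range(M) for d in range(D)}         # shipments
    for s in range(S):
        for d in range(D):   # demand satisfaction in scenario s
            mdl.add_constraint(mdl.sum(ys[s, m, d] for m in range(M)) >= demand[s][d])
        for m in range(M):   # shipments limited by stock plus replenishment
            mdl.add_constraint(mdl.sum(ys[s, m, d] for d in range(D)) <= x[m] + yo[s, m])
    first_stage = mdl.sum(c1 * x[m] for m in range(M))
    second_stage = mdl.sum(
        prob[s] * (mdl.sum(c2 * yo[s, m] for m in range(M))
                   + mdl.sum(ship_cost[m][d] * ys[s, m, d]
                             for m in range(M) for d in range(D)))
        for s in range(S))
    mdl.minimize(first_stage + second_stage)
    mdl.solve()
    return mdl.objective_value
```

    The per-pair problems (5) and (6) behind Table 4 follow the same pattern, with the expectation replaced by a single forecast or realized scenario.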

    (1)–(3) can deliver the results in the perfect information setting: WS = 2349542.87, RP = 2475020.77, and EVPI = 125477.9.

    Next, consider the information imperfectness in the scarce-data setting. Assume that the decision-maker only has the prior distribution, not the probabilistic structure. By (5)–(7), all $\overline{WS}(s_i,s_j)$ are presented in Table 4, in which the diagonal entries ($s_i=s_j$) represent the corresponding perfect information results.

    Table 4.  The computational results of $\overline{WS}(s_i,s_j)$ (rows: forecast scenario $s_i$; columns: realization $s_j$).
    s1 s2 s3 s4 s5 s6 s7 s8
    s1 1895947 1982349 2178885 2330649 2516421 2714505 3115136 3474478
    s2 1923247 1956639 2162505 2314269 2500041 2698125 3098756 3458098
    s3 2020797 2054189 2103975 2257790 2441511 2639595 3040226 3399568
    s4 2100747 2134139 2183925 2207769 2395731 2591625 2992256 3351598
    s5 2178197 2211589 2261375 2286066 2347071 2545155 2945786 3305128
    s6 2283647 2317039 2366825 2390669 2452521 2481885 2882516 3241858
    s7 2471097 2504489 2554275 2578119 2639971 2669335 2770046 3129388
    s8 2647047 2680439 2730225 2754069 2815921 2845285 2945996 3023818


    Using the approaches proposed in Section 3.3, the $\overline{WS}^R(\Gamma)$ and the EVII under different $\Gamma$ shown in Figure 1 can be obtained. Moreover, by model (16), $\Gamma^*$ (= 0.242) can be identified, which makes the information worthless.

    Figure 1.  The EVII under different Γ.

    This section focuses on the game setting based on the above example.

    a. The decentralized setting

    A dedicated coordination mechanism is required here. In the decentralized setting, the costs of the information provider and the decision-maker are given by (17), in which the transfer from the decision-maker to the provider is specified by θ. It can be a fixed value that is irrelevant to the original information quality Γ. Nevertheless, a more reasonable way is to define θ based on the fixed ratio of the benefit-sharing. For example, define θ=0.6×EVII, which means that the decision-maker should pay the provider 60% of the benefits for her information.

    Obviously, if there is no extra compensation, the provider will not improve her information accuracy. Thus, in the information quality improvement setting, we still consider the fixed benefit-sharing way and study whether it works or not. Accordingly, the objectives of the two participants can be modified, i.e., (18), as below.

    $\begin{cases}\pi_p^1=\tau(1-\Gamma)-\theta+\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2,\\ \pi_d^1=\theta+\overline{WS}^R(\Gamma-\Delta\Gamma)-RP,\end{cases}$

    where $\theta=0.6\times EVII=0.6\times\left(RP-\overline{WS}^R(\Gamma-\Delta\Gamma)\right)$ is the fixed share of the gain from the improved information. Moreover, set $\tau=2\times10^5$, $\tau_1=3\times10^5$, $\tau_2=10^6$, and $\Gamma=0.16$. The other parameters are set to the values given in the last section. Then, $\pi_p^1$, $\pi_d^1$, and the total cost under different $\Delta\Gamma$ are shown below.

    Figure 2 indicates that the fixed benefit-sharing way cannot help to achieve the global optimum. To be specific, the optimal choice of the information provider is ΔΓ=0.005, while the globally optimal ΔΓ should be 0.109. Thus, a dedicated coordination mechanism is required.

    Figure 2.  The costs under different ΔΓ with the fixed benefit-sharing.

    b. The linear compensation mechanism

    Still consider the example given in Section 5.2.1.

    First, by Corollary 1, $\tau_1<\sum_{s_j\in S}p_jG_j=517929.2$. Then, Proposition 3 yields Figure 3, which indicates the value of $\Delta\Gamma^c$ w.r.t. $\tau_1$ and $\tau_2$ under different $\Gamma$. To be specific, the straight line specified by each $\Gamma$ divides the figure into two parts. When $(\tau_1,\tau_2)$ lies in the lower-left or upper-right area, $\Delta\Gamma^c=\Gamma$ or $\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}$, respectively. In addition, it is found that the lower-left corner shrinks as $\Gamma$ increases. This means that $\Delta\Gamma^c$ can reach $\Gamma$ only when $(\tau_1,\tau_2)$ is very small, because a higher $\Gamma$ means lower information quality, which makes it more difficult to reach the perfect information situation.

    Figure 3.  The results of ΔΓc w.r.t. τ1 and τ2 under different Γ.

    In Figure 4, the black columns show the case $\Delta\Gamma^c=\Gamma$, which corresponds to the lower-left corner in Figure 3. When $\tau_1$ and $\tau_2$ become large, it is costly to improve the information quality. Thus, the upper extent of information quality improvement becomes smaller. The changing trend can also be explained by Corollary 2. Moreover, the column under $\tau_1=3\times10^5$ and $\tau_2=10^6$ has the corresponding $\Delta\Gamma^c=0.109$. It is the improvement extent that achieves the global optimum, as shown in Figure 2. Thus, the proposed compensation mechanism can help to realize the global optimum.

    Figure 4.  The changing of ΔΓc under different τ1 and τ2 (Γ=0.16).

    Next, the focus is on the optimal compensation contract, i.e., $\langle\alpha^*,\beta^*\rangle$. First, from Propositions 3 and 4, re-write $\langle\alpha^*,\beta^*\rangle$ as below:

    $\langle\alpha^*,\beta^*\rangle=\begin{cases}\left\langle\theta-\frac{\left(\sum_{s_j\in S}p_jG_j-\tau_1\right)^2}{4\tau_2},\ \sum_{s_j\in S}p_jG_j\right\rangle, & 0<\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}<\Gamma,\\ \left\langle\theta-\tau_2\Gamma^2,\ 2\tau_2\Gamma+\tau_1\right\rangle, & \text{otherwise}.\end{cases}$

    The formulas show that there are two cases for $\alpha^*$ and $\beta^*$. The condition differentiating the two cases coincides with that in Figure 3. In each case, $\alpha^*$ and $\beta^*$ exhibit distinct changing trends, as shown in Figure 5, in which the first case is specified by the grey columns and the second one by the black area.

    Figure 5.  The changing of α,β under different τ1 and τ2 (Γ=0.16).

    It is found that, when both $\tau_1$ and $\tau_2$ are small, the compensation is mainly dominated by the fixed term of the contract, i.e., $\alpha^*$. Then, with the increase of $\tau_1$ and $\tau_2$, $\alpha^*$ decreases linearly with $\tau_2$ but is independent of $\tau_1$, while $\beta^*$ increases linearly with $\tau_1$ and $\tau_2$. Thus, $\beta^*$, the parameter associated with $\Delta\Gamma$, gradually dominates the contract. Furthermore, when both $\tau_1$ and $\tau_2$ are large enough to lie in the grey area, the improvement activity requires more investment. Thus, $\beta^*$ remains high, and $\alpha^*$ gradually increases with $\tau_1$ and $\tau_2$ to strengthen the compensation.

    Finally, observe the objective values of the players. Since the information improvement cost of the provider is compensated by the contract, we only focus on the benefit of the decision-maker, i.e., $\pi_d(\alpha^*,\beta^*)$, in Figure 6. It is found that, with the increase of $\tau_1$ and $\tau_2$, his benefit decreases significantly, because the improvement activity costs much more and thus incurs higher compensation.

    Figure 6.  The changing of πd(α,β) under different τ1 and τ2 (Γ=0.16).

    So far, we have shown how our study can be applied to a practical OR/MS problem. The whole process demonstrates how information quality influences the two-stage SP decision-making performance, and gives the boundary IIR value. Efforts have also been made to address the two-stage SP problem with exogenous costly information acquisition. Our study can be applied to general interactions between information management and stochastic decision-making.

    In this study, a fundamental question is explored: how to evaluate and acquire imperfect information in the two-stage SP setting under the challenge of data scarcity. To evaluate the EVII, we propose the robust WS concept, which is modeled by a max-min-min problem with a bi-level structure. To find the optimal solution, three ways are developed, including numerical, analytical, and equivalent-reformulation approaches, suitable for different settings. Thus, the EVII can be obtained, and the point at which imperfect information becomes worthless can be identified. Furthermore, a Stackelberg game is modeled to study the coordination of the costly information acquisition process between the decision-maker, who utilizes the two-stage SP, and the information provider. To realize the global optimum, a linear contract is designed for the decision-maker relying on SP to compensate for the information provider's efforts in information quality improvement. Finally, a two-stage production and shipment model is introduced and our study's effectiveness is validated. Therefore, we provide a novel and unified model to study the interactions between information management and SP in the scarce-data setting.

    The following is suggested for future studies. First, in this paper, the budget value is used to capture the information imperfectness in the scarce-data setting. Although such a way is popular in robust optimization, e.g., Bertsimas and Sim [42], sometimes, it may lead to over-conservative results. Exploring new approaches is necessary for future studies. For example, the chance-constrained method can be used, and thus, the IIR constraint can be satisfied with a certain probabilistic level. However, the computationally tractable safe approximation of such a chance constraint is often expressed as the conic quadratic constraint [44]. Therefore, further study is needed to discuss how to combine it with both the max-min-min problem and the costly information acquisition game. Second, this paper focuses on imperfect information by considering the misinformation probabilities among scenarios. However, in the scenarios, the values associated with the uncertain parameters are ignored in this paper. Thus, a natural extension is to take both the misinformation probabilities and the values associated with the scenarios into account together in the future. Third, it is interesting to extend our study into the predictive context and discuss the value of forecasting methods. For example, the K-Nearest-Neighbor (KNN) technique can predict a set of scenarios. What is the value of this technique from the perspective of the decision-maker adopting SP? To answer this question, probabilistic estimation should be taken into account. This extension is worth conducting because it integrates the predictive decisions into the prescriptive ones in the imperfect information setting. Finally, given that many decision-makers are not risk-neutral, a possible future direction is to encompass non-linear risk preferences. Our study provides the complete analysis framework and solid research foundation for these directions.

    The authors are grateful for the valuable comments from the academic editor and the two anonymous reviewers. This study is supported by the National Natural Science Foundation of China (No. 71832001), the Natural Science Foundation of Shanghai (No. 20ZR1401900), and the Fundamental Research Funds for the Central Universities (No. 2232018H-07).

    All authors declare no conflicts of interest in this paper.

    Proof of Proposition 2:

    At first, linearize the bi-linear terms, i.e., $\lambda(i|j)\overline{WS}(s_i,s_j)$, in the objective (9a). Recalling Proposition 1, the optimal value of $\lambda(i|j)$ should be 0 or $\Gamma$, and there is only one $\lambda(i|j)=\Gamma$ for each $s_j$. Thus, introduce a binary variable $\gamma_{ij}$ and use $\Gamma\gamma_{ij}$ to replace $\lambda(i|j)$. Then:

    $\overline{WS}^R(\Gamma)=\max_{x,y,\gamma}\sum_{s_j\in S}p_j\sum_{s_i\in S}\Gamma\gamma_{ij}\overline{WS}(s_i,s_j)$ (A1)
    s.t. $\quad\sum_{s_i\in S}\gamma_{ij}=1,\ \forall j,$ (A2)
    $\gamma_{ij}\in\{0,1\},\ \forall i,j,\quad (7),(12),(13).$ (A3)

    Then, a continuous auxiliary variable $\rho_{ij}$ is introduced to replace $\gamma_{ij}\overline{WS}(s_i,s_j)$. Therefore, the above model can be linearized as below:

    $\overline{WS}^R(\Gamma)=\max_{x,y,\gamma,\rho}\sum_{s_j\in S}p_j\sum_{s_i\in S}\Gamma\rho_{ij}$ (A4)
    s.t. $\quad\rho_{ij}\leq\overline{WS}(s_i,s_j),\ \forall i,j,$ (A5)
    $\rho_{ij}\leq\gamma_{ij}M,\ \forall i,j,\quad (A2),(A3),(7),(12),(13),$ (A6)

    where $M$ is a sufficiently large number. Notice that $\overline{WS}(s_i,s_j)\geq 0$. Constraints (A5) and (A6) enforce $\rho_{ij}=0$ if $\gamma_{ij}=0$ and $\rho_{ij}=\overline{WS}(s_i,s_j)$ if $\gamma_{ij}=1$ at the optimum. Thus, Proposition 2 is obtained.

    Proof of Proposition 3:

    For the quadratic function (20), it is easy to get that the optimal ΔΓ, i.e., ΔΓc, should satisfy:

    $\Delta\Gamma^c=\begin{cases}0, & \text{if }\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\leq 0,\\ \frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}, & \text{if }0<\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\leq\Gamma,\\ \Gamma, & \text{else}.\end{cases}$ (A7)

    For succinctness, it could be re-written as

    $\Delta\Gamma^c=\min\left\{\left(\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{2\tau_2}\right)^+,\ \Gamma\right\},$ (A8)

    where $a^+=\max\{a,0\}$. This proves Proposition 3.

    Proof of Corollary 1:

    If the information quality improvement is beneficial, by (17) and (18), we have:

    $0>(\pi_d+\pi_p)-(\pi_d^0+\pi_p^0)$
    $=\left(\tau(1-\Gamma)+\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2+\overline{WS}^R(\Gamma-\Delta\Gamma)-RP\right)-\left(\tau(1-\Gamma)+\overline{WS}^R(\Gamma)-RP\right)$
    $=\tau_1\Delta\Gamma+\tau_2\Delta\Gamma^2+\overline{WS}^R(\Gamma-\Delta\Gamma)-\overline{WS}^R(\Gamma)$
    $=\Delta\Gamma\left(\tau_1+\tau_2\Delta\Gamma-\sum_{s_j\in S}p_jG_j\right).$

    Focusing on $\Delta\Gamma>0$, with $\tau_1>0$ and $\tau_2>0$, we thus have:

    $\frac{\sum_{s_j\in S}p_jG_j-\tau_1}{\tau_2}>\Delta\Gamma>0.$

    As a result, it is obtained that $\sum_{s_j\in S}p_jG_j>\tau_1$. Then, Corollary 1 is proved.

    Proof of Proposition 4:

    First, re-write the best response of the information provider, i.e., (23) as below:

    $\Delta\Gamma^d=\begin{cases}0, & \text{if }\frac{\beta-\tau_1}{2\tau_2}<0,\\ \frac{\beta-\tau_1}{2\tau_2}, & \text{if }0\leq\frac{\beta-\tau_1}{2\tau_2}<\Gamma,\\ \Gamma, & \text{if }\Gamma\leq\frac{\beta-\tau_1}{2\tau_2}.\end{cases}$ (A9)

    Substituting the three cases of $\Delta\Gamma^d$ into the provider's individual rationality constraint (22c) gives:

    $\begin{cases}\alpha\geq\theta, & \text{if }\frac{\beta-\tau_1}{2\tau_2}<0,\\ \alpha+\frac{(\beta-\tau_1)^2}{4\tau_2}\geq\theta, & \text{if }0\leq\frac{\beta-\tau_1}{2\tau_2}<\Gamma,\\ \alpha-\tau_2\Gamma^2+(\beta-\tau_1)\Gamma\geq\theta, & \text{if }\Gamma\leq\frac{\beta-\tau_1}{2\tau_2}.\end{cases}$ (A10)

    To ensure the satisfaction of the provider's individual rationality constraint (22c), (A10) should hold. Notice that the focus is on the setting of the information quality improvement, in which ΔΓd>0. Thus, only the last two cases in (A10) will be taken into account. Accordingly, problem (22) can be reformulated into the following two sub-models.

    Case 1:

    In this case, $0\leq\frac{\beta-\tau_1}{2\tau_2}<\Gamma$ and $\alpha+\frac{(\beta-\tau_1)^2}{4\tau_2}\geq\theta$, with the best response $\Delta\Gamma^d=\frac{\beta-\tau_1}{2\tau_2}$. Thus, problem (22) can be reformulated as:

    $\min_{\alpha,\beta>0}\pi_d(\alpha,\beta)=\min_{\alpha,\beta>0}\left\{\alpha+\frac{\beta(\beta-\tau_1)}{2\tau_2}+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+\left(\Gamma-\frac{\beta-\tau_1}{2\tau_2}\right)G_j\right]-RP\right\},$
    s.t. $\quad\alpha+\frac{\beta(\beta-\tau_1)}{2\tau_2}-\sum_{s_j\in S}p_jG_j\frac{\beta-\tau_1}{2\tau_2}\leq\theta,$
    $\tau_1\leq\beta<2\tau_2\Gamma+\tau_1,$
    $\alpha\geq\theta-\frac{(\beta-\tau_1)^2}{4\tau_2}.$ (A11)

    Obviously, because the objective is linear and increasing w.r.t. $\alpha$, from the last constraint, the optimal solution of the above satisfies $\alpha_1^*=\theta-\frac{(\beta-\tau_1)^2}{4\tau_2}$. Using it to replace $\alpha$ in model (A11), then:

    $\min_{\alpha,\beta>0}\pi_d(\alpha,\beta)=$
    $\min_{\beta>0}\left\{\theta+\frac{1}{4\tau_2}\beta^2-\frac{\sum_{s_j\in S}p_jG_j}{2\tau_2}\beta-\frac{\tau_1^2}{4\tau_2}+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+\left(\Gamma+\frac{\tau_1}{2\tau_2}\right)G_j\right]-RP\right\},$

    s.t.

    $\beta\leq 2\sum_{s_j\in S}p_jG_j-\tau_1,$
    $\tau_1\leq\beta\leq 2\tau_2\Gamma+\tau_1.$

    Obviously, to ensure the existence of feasible solutions, it is required that $2\sum_{s_j\in S}p_jG_j-\tau_1\geq\tau_1$. Recalling the setting $\sum_{s_j\in S}p_jG_j>\tau_1$ given in Corollary 1, the optimal solution of (A11) exists. Moreover, since the objective is a quadratic function of $\beta$, it is easy to find that the optimal $\beta_1^*$ satisfies:

    $\beta_1^*=\begin{cases}\sum_{s_j\in S}p_jG_j, & \text{if }\tau_1<\sum_{s_j\in S}p_jG_j\leq 2\tau_2\Gamma+\tau_1,\\ 2\tau_2\Gamma+\tau_1, & \text{else}.\end{cases}$

    Thus, we have:

    $\langle\alpha_1^*,\beta_1^*\rangle=\begin{cases}\left\langle\theta-\frac{\left(\sum_{s_j\in S}p_jG_j-\tau_1\right)^2}{4\tau_2},\ \sum_{s_j\in S}p_jG_j\right\rangle, & \text{if }\tau_1<\sum_{s_j\in S}p_jG_j\leq 2\tau_2\Gamma+\tau_1,\\ \left\langle\theta-\tau_2\Gamma^2,\ 2\tau_2\Gamma+\tau_1\right\rangle, & \text{else},\end{cases}$ (A12)

    with the objective:

    $\pi_d(\alpha_1^*,\beta_1^*)=\begin{cases}\theta-\frac{\left(\sum_{s_j\in S}p_jG_j-\tau_1\right)^2}{4\tau_2}+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+\Gamma G_j\right]-RP, & \text{if }\tau_1<\sum_{s_j\in S}p_jG_j\leq 2\tau_2\Gamma+\tau_1,\\ \theta+\tau_1\Gamma+\tau_2\Gamma^2+\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)-RP, & \text{else}.\end{cases}$ (A13)

    Case 2:

    Here, $\Gamma\leq\frac{\beta-\tau_1}{2\tau_2}$ and $\alpha-\tau_2\Gamma^2+(\beta-\tau_1)\Gamma\geq\theta$, with the best response $\Delta\Gamma^d=\Gamma$. Thus, the variant of (22) can be given as below:

    $\min_{\alpha,\beta>0}\pi_d(\alpha,\beta)=\min_{\alpha,\beta}\left(\alpha+\beta\Gamma+\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)-RP\right)$
    s.t. $\quad\alpha+\beta\Gamma+\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)\leq\theta+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+\Gamma G_j\right],$
    $\beta\geq 2\tau_2\Gamma+\tau_1,$
    $\alpha+\beta\Gamma\geq\theta+\tau_1\Gamma+\tau_2\Gamma^2.$ (A14)

    Obviously, from the last two constraints, the optimal solution of (A14) satisfies $\beta_2^*=2\tau_2\Gamma+\tau_1$ and $\alpha_2^*+\beta_2^*\Gamma=\theta+\tau_1\Gamma+\tau_2\Gamma^2$. Then, $\alpha_2^*=\theta-\tau_2\Gamma^2$. Accordingly, the first constraint can be re-written as:

    $\tau_1\Gamma+\tau_2\Gamma^2\leq\Gamma\sum_{s_j\in S}p_jG_j.$

    Since $\Gamma>0$, this reduces to $\tau_1+\tau_2\Gamma\leq\sum_{s_j\in S}p_jG_j$. When this inequality holds, the optimal solution of (A14) is

    $\langle\alpha_2^*,\beta_2^*\rangle=\left\langle\theta-\tau_2\Gamma^2,\ 2\tau_2\Gamma+\tau_1\right\rangle,$ (A15)

    with the objective

    $\pi_d(\alpha_2^*,\beta_2^*)=\theta+\tau_1\Gamma+\tau_2\Gamma^2+\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)-RP.$ (A16)

    In summary, with the above two cases combined, the optimal solution of problem (22) satisfies:

    $\langle\alpha^*,\beta^*\rangle=\begin{cases}\left\langle\theta-\frac{\left(\sum_{s_j\in S}p_jG_j-\tau_1\right)^2}{4\tau_2},\ \sum_{s_j\in S}p_jG_j\right\rangle, & \text{if }\tau_1<\sum_{s_j\in S}p_jG_j\leq 2\tau_2\Gamma+\tau_1,\\ \left\langle\theta-\tau_2\Gamma^2,\ 2\tau_2\Gamma+\tau_1\right\rangle, & \text{else},\end{cases}$ (A17)

    with the optimal value of the objective

    $\pi_d(\alpha^*,\beta^*)=\begin{cases}\theta-\frac{\left(\sum_{s_j\in S}p_jG_j-\tau_1\right)^2}{4\tau_2}+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+\Gamma G_j\right]-RP, & \text{if }\tau_1<\sum_{s_j\in S}p_jG_j\leq 2\tau_2\Gamma+\tau_1,\\ \theta+\tau_1\Gamma+\tau_2\Gamma^2+\sum_{s_j\in S}p_j\overline{WS}(s_j,s_j)-RP, & \text{else}.\end{cases}$ (A18)

    Furthermore, recalling (21), the optimal solution $\langle\alpha^*,\beta^*\rangle$ and the objective $\pi_d(\alpha^*,\beta^*)$ can be reformulated in the following succinct forms:

    $\langle\alpha^*,\beta^*\rangle=\left\langle\theta-\tau_2(\Delta\Gamma^c)^2,\ 2\tau_2\Delta\Gamma^c+\tau_1\right\rangle,\quad \sum_{s_j\in S}p_jG_j\geq\tau_1.$ (A19)

    And

    $\pi_d(\alpha^*,\beta^*)=\theta+\tau_2(\Delta\Gamma^c)^2+\tau_1\Delta\Gamma^c+\sum_{s_j\in S}p_j\left[\overline{WS}(s_j,s_j)+(\Gamma-\Delta\Gamma^c)G_j\right]-RP,\quad \sum_{s_j\in S}p_jG_j\geq\tau_1.$ (A20)

    Thus, the proof of Proposition 4 is completed.



    [1] N. Unnikrishnan, B. Vineshkumar, Reversed percentile residual life and related concepts, J. Korean Stat. Soc., 40 (2011), 85–92. https://doi.org/10.1016/j.jkss.2010.06.001 doi: 10.1016/j.jkss.2010.06.001
    [2] M. Shafaei Noughabi, S. Izadkhah, On the quantile past lifetime of the components of the parallel systems, Commun. Stat. Theory Meth., 45 (2016), 2130–2136. https://doi.org/10.1080/03610926.2013.875573 doi: 10.1080/03610926.2013.875573
    [3] M. Shafaei Noughabi, Solving a functional equation and characterizing distributions by quantile past lifetime functions, Econ. Qual. Control, 31 (2016), 55–58. https://doi.org/10.1515/eqc-2015-0017 doi: 10.1515/eqc-2015-0017
    [4] M. Mahdy, Further results involving percentile inactivity time order and its inference, Metron, 72 (2014), 269–282. https://doi.org/10.1007/s40300-013-0017-9 doi: 10.1007/s40300-013-0017-9
    [5] L. Balmert, J. H. Jeong, Nonparametric inference on quantile lost lifespan, Biometrics, 73 (2017), 252–259. https://doi.org/10.1111/biom.12555 doi: 10.1111/biom.12555
    [6] L. C. Balmert, R. Li, L. Peng, J. H. Jeong, Quantile regression on inactivity time, Stat. Meth. Med. Res., 30 (2021), 1332–1346. https://doi.org/10.1177/0962280221995977 doi: 10.1177/0962280221995977
    [7] A. M. Teamah Abd-Elmonem, A. Elbanna, A. M. Gemeay, Frechet-Weibull mixture distribution: properties and applications, Appl. Math. Sci., 14 (2020), 75–86. https://doi.org/10.12988/ams.2020.912165 doi: 10.12988/ams.2020.912165
    [8] I. Elbatal, S. M. Alghamdi, F. Jamal, S. Khan, E. M. Almetwally, M. Elgarhy, Kavya-manoharan weibull-g family of distributions: statistical inference under progressive type-ii censoring scheme, Adv. Appl. Stat., 87 (2023), 191–223. https://doi.org/10.17654/0972361723034 doi: 10.17654/0972361723034
    [9] C. F. Chung, Confidence bands for percentile residual lifetime under random censorship model, J. Multivariate Anal., 29 (1989), 94–126. https://doi.org/10.1016/0047-259X(89)90079-1 doi: 10.1016/0047-259X(89)90079-1
    [10] M. D. Burke, S. Csorgo, L. Horvath, Strong approximations of some biometric estimates under random censorship, Z. Wahrsch. Verw. Gebiete, 56 (1981), 87–112.
    [11] S. Csorgo, L. Horvath, The rate of strong uniform consistency for the product-limit estimator, Z. Wahrsch. Verw. Gebiete, 62 (1983), 411–426.
    [12] E. E. Aly, M. Csorgo, L. Horvath, Strong apprroximations of the quantile process of the product-limit estimator, J. Multivariate Anal., 16 (1985), 185–210. https://doi.org/10.1016/0047-259X(85)90033-8 doi: 10.1016/0047-259X(85)90033-8
    [13] M. Csorgo, S. Scorgo, Estimation of percentile residual life, Oper. Res., 35 (1987), 598–606. https://doi.org/10.1287/opre.35.4.598 doi: 10.1287/opre.35.4.598
    [14] J. R. Blum, V. Susarla, Maximal deviation theory of density and failure rate function estimates based on censored data, Multivariate Anal., 5 (1980), 213–222.
    [15] M. D. Burke, L. Horvath, Density and failure rate estimation in competing risks model, Sankhya A., 46 (1982), 135–154. https://doi.org/10.1016/j.ress.2010.04.006 doi: 10.1016/j.ress.2010.04.006
    [16] J. Mielniczuk, Some asymptotic properties of kernel estimators of a density function in case of censored data, Ann. Statist., 14 (1986), 766–773. https://doi.org/10.1214/aos/1176349954 doi: 10.1214/aos/1176349954
    [17] J. S. Marron, W. J. Padgett, Asymptotically optimal bandwidth selection for kernel density estimators from randomly right-censored samples, Ann. Statist., 15 (1987), 1520–1535. https://doi.org/10.1214/aos/1176350607 doi: 10.1214/aos/1176350607
    [18] S. H. Lo, Y. P. Mack, J. L. Wang, Density and hazard rate estimation for censored data via strong representation of the Kaplan-Meier estimator, Probab. Theory Related Fields, 80 (1989), 461–473.
    [19] T. R. Fleming, D. P. Harrington, Counting processes and survival analysis, New York: Wiley, 1991. https://doi.org/10.1002/9781118150672
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
