
Numerical method for a compound Poisson risk model with liquid reserves and proportional investment

  • These authors contributed equally to this work.
  • In this paper, a classical risk model with liquid reserves and proportional investment is considered, and the expected total discounted dividend before ruin of an insurance company under the threshold dividend strategy is studied. First, the integro-differential equations satisfied by the expected total discounted dividend before ruin, together with certain boundary conditions, are derived. Second, since explicit solutions of these equations cannot be obtained, numerical approximations are obtained by the sinc approximation method. Finally, we discuss the effects of parameters such as the risk capital ratio and the liquid reserve on the expected total discounted dividend before ruin through some examples.

    Citation: Chunwei Wang, Shujing Wang, Jiaen Xu, Shaohua Li. Numerical method for a compound Poisson risk model with liquid reserves and proportional investment[J]. AIMS Mathematics, 2024, 9(5): 10893-10910. doi: 10.3934/math.2024532




    A multi-attribute decision-making (MADM) tool plays an important role in the environment of fuzzy set theory and in evaluating unreliable and ambiguous information. According to Yukalov and Sornette [1], it is sometimes very difficult to handle a large body of data or information, either because of the structure of the data or because of the uncertainty it contains. Human beings have faced such issues for the last few decades. To address them, Zadeh [2] initiated the notion of fuzzy sets (FS), in which the truth grade is taken from [0, 1]. Moreover, Atanassov [3] modified the technique of FS and initiated the principle of the intuitionistic FS (IFS) by including a falsity grade, with the condition that the sum of both grades must be less than or equal to 1. Sometimes, however, a decision maker provides information in which the sum of both grades exceeds 1, and the principle of IFS fails. For this situation, Yager [4] introduced the principle of the Pythagorean FS (PFS), with the condition that, instead of the sum of both grades, the sum of the squares of both grades must not exceed 1. Although the introduction of PFS gave decision makers much more flexibility in assigning the two grades, there are still many situations where PFS fails because even the sum of the squares of both grades exceeds 1. To handle this ambiguity, Yager [5] improved the rule of PFS and initiated the notion of the q-rung orthopair FS (QROFS), with the rule that the sum of the q-th powers (q ≥ 1) of the two grades must not exceed 1. The QROFS has received massive attention from many researchers and has been utilized in different areas [6,7,8,9,10,11,12,13,14]. The mathematical portrayal of the IFS and its generalizations is clarified with the assistance of Figure 1.

    Figure 1.  Geometrical representation of IFS, PFS, QROFS.

    Ramot et al. [15] investigated the idea of the complex FS (CFS) by including an imaginary part in the truth grade, called the complex truth grade, whose real and imaginary parts belong to [0, 1]. To handle further complications, Alkouri and Salleh [16] modified the principle of CFS by including a falsity grade, elaborating the complex IFS (CIFS), with the rule that the sums of the real parts and of the imaginary parts of the two grades must remain in [0, 1]. Moreover, Ullah [17] utilized the idea of the complex PFS (CPFS) by relaxing the CIFS rule, with the condition that the sums of the squares of the real parts and of the imaginary parts of the two grades remain in [0, 1]. Akram and Naz initiated decision making for CPFS [18]. Although CPFS has proved its effectiveness, there are many situations where it fails to perform, for instance when a decision maker provides information whose sum of squares of the real or imaginary parts exceeds 1; for this, the complex QROFS (CQROFS) was developed by Liu et al. [19,20]. A CQROFS is characterized by a membership degree and a non-membership degree whose sums of q-th powers of the real parts (and likewise of the imaginary parts) are less than or equal to 1. The CQROFS is therefore more general and more helpful to researchers in the field than CPFS and CIFS. Mahmood and Ali [21,22] enhanced the effectiveness of the notion of CQROFS by studying the TOPSIS technique, correlation coefficients, and Maclaurin operators in the CQROFS environment.

    In terms of the real-life importance of linguistic terms (LT), Zadeh [23] introduced the theory of LT sets (LTS) for evaluating awkward and unreliable problems in real-life decisions. Furthermore, Herrera and Martinez [24] pioneered the theory of the 2-tuple LT (2TLT) set, which refines the LTS. Herrera and Martinez [25,26] utilized the theory of LTS in different fields. Li and Liu [27] initiated the theory of Heronian mean operators for 2TLT.

    The decision-theoretic rough set (DTRS) is one of the capable methods for assessing abnormal and convoluted data. Different researchers have adapted it in various ways, such as loss functions (LF) [28], attribute reduction [29], and further developed models based on DTRS [30]. Yao [31] introduced three-way decisions (3WD), derived from DTRS to cope with practical decision problems. 3WD partitions the universal set into three distinct parts: the positive region (POS(C)), the negative region (NEG(C)), and the boundary region (BND(C)). As a combination of DTRS and the Bayesian decision method, 3WD procedures have successfully handled many classification problems, and this framework has been applied in various fields [32,33]. The graphical portrayal of 3WD based on the rough set is given with the assistance of Figure 2.

    Figure 2.  Geometrical interpretation of the 3-WDs.

    Liu and Yang [34] introduced the loss function in 3WD based on intuitionistic fuzzy linguistic sets. In view of the above analysis, we make the following contributions: we generalize the important techniques of three-way decisions (3WD) [35] and DTRS to complex q-rung orthopair 2-tuple linguistic variables (CQRO2-TLV), and elaborate certain important properties. Moreover, the generalized Maclaurin symmetric mean (GMSM) [36] is a dominant and flexible method for determining the accuracy and dominance of real-life issues. Therefore, by considering complex q-rung orthopair 2-tuple linguistic (CQRO2-TL) information and the GMSM, we present the CQRO2-TL GMSM (CQRO2-TLGMSM) operator and the weighted CQRO2-TL GMSM (WCQRO2-TLGMSM) operator, and demonstrate their effective properties. Finally, a model is applied to thoroughly elucidate the proposed procedure, and the effects of different conditional probabilities on the decision results are examined. The elaborated approach is also compared with certain existing approaches to demonstrate the capability of the presented method. For simplicity, the work explored in this manuscript is summarized with the help of Figure 3.

    Figure 3.  Geometrical representations of the explored approach in this manuscript.

    This manuscript is outlined as follows. In Section 2, we briefly review some helpful notions of the 2-tuple linguistic function [24], its inverse [24], CQROFS [19,20,21,22,35], and their operational laws. Furthermore, the GMSM operators [36] are discussed. In Section 3, we explore CQRO2-TLVs and their algebraic laws. In Section 4, we generalize 3WD and DTRS with CQRO2-TLVs and elaborate certain important properties. The GMSM is a dominant and flexible way to determine the accuracy and dominance of real-life issues; therefore, by considering CQRO2-TL information and the GMSM, in Section 5 we present the CQRO2-TLGMSM operator and the WCQRO2-TLGMSM operator and demonstrate their effective properties. In Section 6, a model is applied to thoroughly elucidate the proposed procedure, and the effects of different conditional probabilities on the decision results are examined. The conclusion of this manuscript is discussed in Section 7.

    For the convenience and improved understanding of the reader, in this section we recall some basic notions that will be used throughout the manuscript. The symbols $UNI$, $\mu_{Q_{CQ}}(ử)$, and $\eta_{Q_{CQ}}(ử)$ denote the universal set, the truth grade, and the falsity grade, respectively, where $\alpha_{SC},\delta_{SC},q_{SC}\ge 1$.

    Definition 2.1. [24] For a linguistic term set $S_{LT}=\{s_{SLT-0},s_{SLT-1},s_{SLT-2},s_{SLT-3},\ldots,s_{SLT-g}\}$ and $\beta_{SC}\in[0,1]$, the 2-TL function $\Delta_{LT}$ is given by:

    $\Delta_{LT}:[0,1]\rightarrow S_{LT}\times\left[-\frac{1}{2g},\frac{1}{2g}\right)$, (2.1)
    $\Delta_{LT}(\beta_{SC})=(s_{SLT-j},\alpha_{SC})$ with $\begin{cases} s_{SLT-j}, & j=\mathrm{round}(\beta_{SC}\cdot g)\\ \alpha_{SC}=\beta_{SC}-\frac{j}{g}, & \alpha_{SC}\in\left[-\frac{1}{2g},\frac{1}{2g}\right)\end{cases}$ (2.2)

    The 2-TL inverse function $\Delta^{-1}_{LT}$ is given by:

    $\Delta^{-1}_{LT}:S_{LT}\times\left[-\frac{1}{2g},\frac{1}{2g}\right)\rightarrow[0,1]$, (2.3)
    $\Delta^{-1}_{LT}(s_{SLT-j},\alpha_{SC})=\frac{j}{g}+\alpha_{SC}=\beta_{SC}$. (2.4)
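    For illustration, the following minimal Python sketch implements the normalized 2-tuple conversion of Eqs (2.1)–(2.4); the scale size g and all identifiers are illustrative assumptions rather than part of the paper.

```python
# Minimal sketch of the normalized 2-tuple linguistic functions (Eqs. 2.1-2.4).
# The linguistic scale size g and the function names are illustrative assumptions.

def delta_lt(beta, g):
    """Map beta in [0,1] to a 2-tuple (j, alpha), alpha in [-1/(2g), 1/(2g))."""
    j = round(beta * g)          # index of the closest linguistic term s_j
    alpha = beta - j / g         # symbolic translation from s_j
    return j, alpha

def delta_lt_inv(j, alpha, g):
    """Recover beta = j/g + alpha from a 2-tuple."""
    return j / g + alpha

# Example: g = 6 terms, beta = 0.58 -> closest term s_3, alpha = 0.08
j, alpha = delta_lt(0.58, 6)
assert abs(delta_lt_inv(j, alpha, 6) - 0.58) < 1e-12
```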

    Definition 2.2. [19] A CQROFS is given by

    $Q_{CQ}=\left\{\left(\mu_{Q_{CQ}}(ử),\eta_{Q_{CQ}}(ử)\right) : ử\in UNI\right\}$ (2.5)

    where $\mu_{Q_{CQ}}(ử)=\mu_{Q_{CQ-RP}}(ử)e^{i2\pi\left(\mu_{Q_{CQ-IP}}(ử)\right)}$ and $\eta_{Q_{CQ}}(ử)=\eta_{Q_{CQ-RP}}(ử)e^{i2\pi\left(\eta_{Q_{CQ-IP}}(ử)\right)}$,

    with the conditions: $0\le\mu^{q_{SC}}_{Q_{CQ-RP}}(ử)+\eta^{q_{SC}}_{Q_{CQ-RP}}(ử)\le 1$ and $0\le\mu^{q_{SC}}_{Q_{CQ-IP}}(ử)+\eta^{q_{SC}}_{Q_{CQ-IP}}(ử)\le 1$.

    Moreover, $\zeta_{Q_{CQ}}(ử)=\left(1-\left(\mu^{q_{SC}}_{Q_{CQ-RP}}(ử)+\eta^{q_{SC}}_{Q_{CQ-RP}}(ử)\right)\right)^{\frac{1}{q_{SC}}}e^{i2\pi\left(1-\left(\mu^{q_{SC}}_{Q_{CQ-IP}}(ử)+\eta^{q_{SC}}_{Q_{CQ-IP}}(ử)\right)\right)^{\frac{1}{q_{SC}}}}$ is called the refusal grade. The CQROFN is given as

    $Q_{CQ}=\left(\mu_{Q_{CQ}},\eta_{Q_{CQ}}\right)=\left(\mu_{Q_{CQ-RP}}e^{i2\pi\left(\mu_{Q_{CQ-IP}}\right)},\eta_{Q_{CQ-RP}}e^{i2\pi\left(\eta_{Q_{CQ-IP}}\right)}\right)$.

    Definition 2.3. [20] For any two CQROFNs

    $Q_{CQ-1}=\left(\mu_{Q_{CQ-RP-1}}e^{i2\pi\left(\mu_{Q_{CQ-IP-1}}\right)},\eta_{Q_{CQ-RP-1}}e^{i2\pi\left(\eta_{Q_{CQ-IP-1}}\right)}\right)$ and

    $Q_{CQ-2}=\left(\mu_{Q_{CQ-RP-2}}e^{i2\pi\left(\mu_{Q_{CQ-IP-2}}\right)},\eta_{Q_{CQ-RP-2}}e^{i2\pi\left(\eta_{Q_{CQ-IP-2}}\right)}\right)$, then

    (1). $Q_{CQ-1}\oplus_{CQ}Q_{CQ-2}=\left(\left(\mu^{q_{CQ}}_{Q_{CQ-RP-1}}+\mu^{q_{CQ}}_{Q_{CQ-RP-2}}-\mu^{q_{CQ}}_{Q_{CQ-RP-1}}\mu^{q_{CQ}}_{Q_{CQ-RP-2}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(\mu^{q_{CQ}}_{Q_{CQ-IP-1}}+\mu^{q_{CQ}}_{Q_{CQ-IP-2}}-\mu^{q_{CQ}}_{Q_{CQ-IP-1}}\mu^{q_{CQ}}_{Q_{CQ-IP-2}}\right)^{\frac{1}{q_{CQ}}}},\ \eta_{Q_{CQ-RP-1}}\eta_{Q_{CQ-RP-2}}e^{i2\pi\left(\eta_{Q_{CQ-IP-1}}\eta_{Q_{CQ-IP-2}}\right)}\right)$;

    (2). $Q_{CQ-1}\otimes_{CQ}Q_{CQ-2}=\left(\mu_{Q_{CQ-RP-1}}\mu_{Q_{CQ-RP-2}}e^{i2\pi\left(\mu_{Q_{CQ-IP-1}}\mu_{Q_{CQ-IP-2}}\right)},\ \left(\eta^{q_{CQ}}_{Q_{CQ-RP-1}}+\eta^{q_{CQ}}_{Q_{CQ-RP-2}}-\eta^{q_{CQ}}_{Q_{CQ-RP-1}}\eta^{q_{CQ}}_{Q_{CQ-RP-2}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(\eta^{q_{CQ}}_{Q_{CQ-IP-1}}+\eta^{q_{CQ}}_{Q_{CQ-IP-2}}-\eta^{q_{CQ}}_{Q_{CQ-IP-1}}\eta^{q_{CQ}}_{Q_{CQ-IP-2}}\right)^{\frac{1}{q_{CQ}}}}\right)$;

    (3). $Q^{\delta_{SC}}_{CQ-1}=\left(\mu^{\delta_{SC}}_{Q_{CQ-RP-1}}e^{i2\pi\left(\mu^{\delta_{SC}}_{Q_{CQ-IP-1}}\right)},\ \left(1-\left(1-\eta^{q_{CQ}}_{Q_{CQ-RP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(1-\left(1-\eta^{q_{CQ}}_{Q_{CQ-IP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}}\right)$;

    (4). $\delta_{SC}Q_{CQ-1}=\left(\left(1-\left(1-\mu^{q_{CQ}}_{Q_{CQ-RP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(1-\left(1-\mu^{q_{CQ}}_{Q_{CQ-IP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}},\ \eta^{\delta_{SC}}_{Q_{CQ-RP-1}}e^{i2\pi\left(\eta^{\delta_{SC}}_{Q_{CQ-IP-1}}\right)}\right)$.
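    For illustration, the operational laws (1)–(4) above can be realized numerically as follows; the flat tuple layout (mu_rp, mu_ip, eta_rp, eta_ip) for a CQROFN and the chosen value of q are illustrative assumptions.

```python
# Sketch of the CQROFN operations of Definition 2.3 on the amplitude pairs.
# Tuple layout (mu_rp, mu_ip, eta_rp, eta_ip) and q are illustrative assumptions.

q = 3  # rung parameter q_CQ

def oplus(a, b):
    """(1): q-rung orthopair sum on membership, product on non-membership."""
    (m1r, m1i, e1r, e1i), (m2r, m2i, e2r, e2i) = a, b
    s = lambda x, y: (x**q + y**q - x**q * y**q) ** (1 / q)
    return (s(m1r, m2r), s(m1i, m2i), e1r * e2r, e1i * e2i)

def otimes(a, b):
    """(2): dual law, product on membership, q-rung sum on non-membership."""
    (m1r, m1i, e1r, e1i), (m2r, m2i, e2r, e2i) = a, b
    s = lambda x, y: (x**q + y**q - x**q * y**q) ** (1 / q)
    return (m1r * m2r, m1i * m2i, s(e1r, e2r), s(e1i, e2i))

def power(a, d):
    """(3): Q^delta."""
    mr, mi, er, ei = a
    f = lambda x: (1 - (1 - x**q) ** d) ** (1 / q)
    return (mr**d, mi**d, f(er), f(ei))

def scale(d, a):
    """(4): delta * Q."""
    mr, mi, er, ei = a
    f = lambda x: (1 - (1 - x**q) ** d) ** (1 / q)
    return (f(mr), f(mi), er**d, ei**d)
```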

    Definition 2.4. [21] For any two CQROFNs

    $Q_{CQ-1}=\left(\mu_{Q_{CQ-RP-1}}e^{i2\pi\left(\mu_{Q_{CQ-IP-1}}\right)},\eta_{Q_{CQ-RP-1}}e^{i2\pi\left(\eta_{Q_{CQ-IP-1}}\right)}\right)$ and

    $Q_{CQ-2}=\left(\mu_{Q_{CQ-RP-2}}e^{i2\pi\left(\mu_{Q_{CQ-IP-2}}\right)},\eta_{Q_{CQ-RP-2}}e^{i2\pi\left(\eta_{Q_{CQ-IP-2}}\right)}\right)$, the score and accuracy functions are given by:

    $Ş(Q_{CQ-1})=\dfrac{\mu^{q_{SC}}_{Q_{CQ-RP-1}}-\eta^{q_{SC}}_{Q_{CQ-RP-1}}+\mu^{q_{SC}}_{Q_{CQ-IP-1}}-\eta^{q_{SC}}_{Q_{CQ-IP-1}}}{2}$, (2.6)
    $Ȟ(Q_{CQ-1})=\dfrac{\mu^{q_{SC}}_{Q_{CQ-RP-1}}+\eta^{q_{SC}}_{Q_{CQ-RP-1}}+\mu^{q_{SC}}_{Q_{CQ-IP-1}}+\eta^{q_{SC}}_{Q_{CQ-IP-1}}}{2}$. (2.7)

    Based on the two notions above, the comparison between two CQROFNs is given by:

    (1). If $Ş(Q_{CQ-1})>Ş(Q_{CQ-2})$, then $Q_{CQ-1}>Q_{CQ-2}$;

    (2). If $Ş(Q_{CQ-1})=Ş(Q_{CQ-2})$, then:

    ⅰ) if $Ȟ(Q_{CQ-1})>Ȟ(Q_{CQ-2})$, then $Q_{CQ-1}>Q_{CQ-2}$;

    ⅱ) if $Ȟ(Q_{CQ-1})=Ȟ(Q_{CQ-2})$, then $Q_{CQ-1}=Q_{CQ-2}$.
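    The score, accuracy, and the induced ordering of Definition 2.4 admit a direct numeric sketch; the tuple layout and the value of q are again illustrative assumptions.

```python
# Score and accuracy of Definition 2.4 (Eqs. 2.6-2.7) and the induced ordering.
# Tuple layout (mu_rp, mu_ip, eta_rp, eta_ip) and q are illustrative assumptions.

q = 3

def score(a):
    mr, mi, er, ei = a
    return (mr**q - er**q + mi**q - ei**q) / 2

def accuracy(a):
    mr, mi, er, ei = a
    return (mr**q + er**q + mi**q + ei**q) / 2

def compare(a, b):
    """Return 1 if a > b, -1 if a < b, 0 if they tie on score and accuracy."""
    if score(a) != score(b):
        return 1 if score(a) > score(b) else -1
    if accuracy(a) != accuracy(b):
        return 1 if accuracy(a) > accuracy(b) else -1
    return 0

print(compare((0.7, 0.7, 0.2, 0.2), (0.6, 0.6, 0.4, 0.4)))   # -> 1
```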

    Definition 2.5. [36] For a family of $Q_{CQ-j}\ (j=1,2,3,\ldots,n)$, the MSM operator is given by:

    $MSM^{K_{SC}}(Q_{CQ-1},Q_{CQ-2},\ldots,Q_{CQ-n})=\left(\dfrac{\bigoplus_{1\le j_{1}<\cdots<j_{K_{SC}}\le n}\bigotimes_{i=1}^{K_{SC}}Q_{CQ-j_{i}}}{C^{K_{SC}}_{n}}\right)^{\frac{1}{K_{SC}}}$ (2.8)

    where $K_{SC}=1,2,\ldots,n$, $(j_{1},j_{2},\ldots,j_{K_{SC}})$ denotes a $K_{SC}$-tuple drawn from $(1,2,\ldots,n)$, and the symbol $C^{K_{SC}}_{n}$ designates the binomial coefficient (BCO). Furthermore:

    (1). $MSM^{K_{SC}}(Q_{CQ},Q_{CQ},\ldots,Q_{CQ})=Q_{CQ}$;

    (2). $MSM^{K_{SC}}(Q_{CQ-1},Q_{CQ-2},\ldots,Q_{CQ-n})\le MSM^{K_{SC}}(Q'_{CQ-1},Q'_{CQ-2},\ldots,Q'_{CQ-n})$, if $Q_{CQ-j}\le Q'_{CQ-j}$ for all $j$;

    (3). $\min_{j}Q_{CQ-j}\le MSM^{K_{SC}}(Q_{CQ-1},Q_{CQ-2},\ldots,Q_{CQ-n})\le\max_{j}Q_{CQ-j}$.

    Definition 2.6. [36] For a family of $Q_{CQ-j}\ (j=1,2,3,\ldots,n)$, the GMSM operator is given by:

    $GMSM^{(K_{SC},\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQ-1},Q_{CQ-2},\ldots,Q_{CQ-n})=\left(\dfrac{\bigoplus_{1\le j_{1}<\cdots<j_{K_{SC}}\le n}\bigotimes_{i=1}^{K_{SC}}Q^{\alpha_{SC-i}}_{CQ-j_{i}}}{C^{K_{SC}}_{n}}\right)^{\frac{1}{\alpha_{SC-1}+\alpha_{SC-2}+\cdots+\alpha_{SC-K_{SC}}}}$ (2.9)

    where $K_{SC}=1,2,\ldots,n$, $\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}}\ge 0$, and $(j_{1},j_{2},\ldots,j_{K_{SC}})$ signifies a $K_{SC}$-tuple drawn from $(1,2,\ldots,n)$. Furthermore:

    (1). $GMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQ},Q_{CQ},\ldots,Q_{CQ})=Q_{CQ}$ and $GMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(0,0,\ldots,0)=0$;

    (2). $GMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQ-1},\ldots,Q_{CQ-n})\le GMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q'_{CQ-1},\ldots,Q'_{CQ-n})$, if $Q_{CQ-j}\le Q'_{CQ-j}$ for all $j$;

    (3). $\min_{j}Q_{CQ-j}\le GMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQ-1},\ldots,Q_{CQ-n})\le\max_{j}Q_{CQ-j}$.
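    To make the combinatorial structure of Eq (2.9) explicit, the following sketch evaluates the GMSM for plain real inputs, with ordinary arithmetic standing in for the CQROFN operations (an illustrative simplification).

```python
# Numeric analogue of the GMSM operator (Eq. 2.9) for real inputs.
# Ordinary sums/products replace the CQROFN laws; names are illustrative.

from itertools import combinations
from math import comb, prod

def gmsm(values, k, alphas):
    """GMSM^{(k, alpha_1..alpha_k)} over real numbers."""
    assert len(alphas) == k
    total = sum(prod(values[j] ** a for j, a in zip(subset, alphas))
                for subset in combinations(range(len(values)), k))
    return (total / comb(len(values), k)) ** (1 / sum(alphas))

# With alphas = (1, ..., 1) this reduces to the ordinary MSM of Definition 2.5.
print(gmsm([0.4, 0.6, 0.7, 0.9], k=2, alphas=(1, 1)))
```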

    As discussed in [34], DTRS involves two states and three actions: the state set $\Omega_{S}=\{\mathcal{F}_{B},\sim\mathcal{F}_{NB}\}$ and the action set $A_{AC}=\{\chi_{P_{AC}},\chi_{B_{AC}},\chi_{N_{AC}}\}$, where $\Omega_{S}$ expresses whether an object belongs or does not belong to a concept, and $A_{AC}$ collects the positive (acceptance), boundary, and negative (rejection) actions. Table 1 shows the loss functions and their representations.

    Table 1.  Loss functions and their representations.
    Symbols   $\mathcal{F}_{B}$ ($P_{AC}$: correctly classified)   $\sim\mathcal{F}_{NB}$ ($N_{AC}$: wrongly classified)
    $\chi_{P_{AC}}$   $Q_{CQ-P_{AC}P_{AC}}$   $Q_{CQ-P_{AC}N_{AC}}$
    $\chi_{B_{AC}}$   $Q_{CQ-B_{AC}P_{AC}}$   $Q_{CQ-B_{AC}N_{AC}}$
    $\chi_{N_{AC}}$   $Q_{CQ-N_{AC}P_{AC}}$   $Q_{CQ-N_{AC}N_{AC}}$
    Note: $Q_{CQ-P_{AC}P_{AC}}$, $Q_{CQ-P_{AC}N_{AC}}$ = costs of the correct and of the mistaken classification of the object ử under the acceptance decision; $Q_{CQ-B_{AC}P_{AC}}$, $Q_{CQ-B_{AC}N_{AC}}$ = costs of the correct and of the mistaken classification under the abstinence (boundary) decision; $Q_{CQ-N_{AC}P_{AC}}$, $Q_{CQ-N_{AC}N_{AC}}$ = costs of the correct and of the mistaken classification under the rejection decision.


    From Table 1, the following inequalities hold:

    $Q_{CQ-P_{AC}P_{AC}}<Q_{CQ-B_{AC}P_{AC}}<Q_{CQ-N_{AC}P_{AC}}$, (2.10)
    $Q_{CQ-N_{AC}N_{AC}}<Q_{CQ-B_{AC}N_{AC}}<Q_{CQ-P_{AC}N_{AC}}$. (2.11)

    According to the Bayesian risk decision hypothesis [34], we have:

    $Pr(\mathcal{F}_{B}|[ử])+Pr(\sim\mathcal{F}_{NB}|[ử])=1$. (2.12)

    By using Eq (2.12), the expected losses $Y_{EL}(\chi_{j_{AC}}|[ử]),\ j=P,B,N$, of taking the different actions are expressed below:

    $Y_{EL}(\chi_{P_{AC}}|[ử])=Q_{CQ-P_{AC}P_{AC}}Pr(\mathcal{F}_{B}|[ử])+Q_{CQ-P_{AC}N_{AC}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (2.13)
    $Y_{EL}(\chi_{B_{AC}}|[ử])=Q_{CQ-B_{AC}P_{AC}}Pr(\mathcal{F}_{B}|[ử])+Q_{CQ-B_{AC}N_{AC}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (2.14)
    $Y_{EL}(\chi_{N_{AC}}|[ử])=Q_{CQ-N_{AC}P_{AC}}Pr(\mathcal{F}_{B}|[ử])+Q_{CQ-N_{AC}N_{AC}}Pr(\sim\mathcal{F}_{NB}|[ử])$. (2.15)

    From the above information, we have:

    $P_{AC}$: When $Y_{EL}(\chi_{P_{AC}}|[ử])\le Y_{EL}(\chi_{B_{AC}}|[ử])$ and $Y_{EL}(\chi_{P_{AC}}|[ử])\le Y_{EL}(\chi_{N_{AC}}|[ử])$, then $ử\in POS(\mathcal{F}_{P})$; (2.16)
    $B_{AC}$: When $Y_{EL}(\chi_{B_{AC}}|[ử])\le Y_{EL}(\chi_{P_{AC}}|[ử])$ and $Y_{EL}(\chi_{B_{AC}}|[ử])\le Y_{EL}(\chi_{N_{AC}}|[ử])$, then $ử\in BUN(\mathcal{F}_{B})$; (2.17)
    $N_{AC}$: When $Y_{EL}(\chi_{N_{AC}}|[ử])\le Y_{EL}(\chi_{P_{AC}}|[ử])$ and $Y_{EL}(\chi_{N_{AC}}|[ử])\le Y_{EL}(\chi_{B_{AC}}|[ử])$, then $ử\in NEG(\mathcal{F}_{N})$. (2.18)
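    As a minimal illustration of Eqs (2.13)–(2.18), the following sketch classifies an object using crisp loss values; in the paper the losses are CQROFNs/CQRO2-TLNs compared through score and accuracy, so the plain real numbers here are only an illustrative stand-in.

```python
# Three-way decision step with crisp losses (illustrative simplification).

def classify(losses, p_b):
    """losses: dict action -> (loss if F_B, loss if ~F_NB); p_b: Pr(F_B | [u])."""
    expected = {a: lp * p_b + ln * (1 - p_b) for a, (lp, ln) in losses.items()}
    best = min(expected, key=expected.get)
    return {"P": "POS", "B": "BND", "N": "NEG"}[best]

# Loss matrix respecting Eqs. (2.10)-(2.11): lambda_PP < lambda_BP < lambda_NP, etc.
losses = {"P": (0.0, 0.9), "B": (0.3, 0.4), "N": (0.8, 0.0)}
print(classify(losses, p_b=0.7))   # strong evidence for F_B -> "POS"
```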

    Based on the above analysis, we now explore the novel notion of the CQRO2-TLS and its fundamental properties. These properties are also illustrated with the help of some numerical examples.

    Definition 3.1. A CQRO2-TLS is given by

    $Q_{CQTL}=\left\{\left(\left(s_{SLT}(ử),\alpha_{SC}\right),\left(\mu_{Q_{CQTL}}(ử),\eta_{Q_{CQTL}}(ử)\right)\right) : ử\in UNI\right\}$ (3.1)

    where $\mu_{Q_{CQTL}}(ử)=\mu_{Q_{CQTL-RP}}(ử)e^{i2\pi\left(\mu_{Q_{CQTL-IP}}(ử)\right)}$ and $\eta_{Q_{CQTL}}(ử)=\eta_{Q_{CQTL-RP}}(ử)e^{i2\pi\left(\eta_{Q_{CQTL-IP}}(ử)\right)}$, with the conditions $0\le\mu^{q_{SC}}_{Q_{CQTL-RP}}(ử)+\eta^{q_{SC}}_{Q_{CQTL-RP}}(ử)\le 1$ and $0\le\mu^{q_{SC}}_{Q_{CQTL-IP}}(ử)+\eta^{q_{SC}}_{Q_{CQTL-IP}}(ử)\le 1$, where $q_{SC}$ is a rational number and the pair $\left(s_{SLT}(ử),\alpha_{SC}\right)$ is called a 2-TLV with $\alpha_{SC}\in\left[-\frac{1}{2g},\frac{1}{2g}\right)$ and $s_{SLT}(ử)\in S_{LT}$.

    Moreover,

    $\zeta_{Q_{CQTL}}(ử)=\left(1-\left(\mu^{q_{SC}}_{Q_{CQTL-RP}}(ử)+\eta^{q_{SC}}_{Q_{CQTL-RP}}(ử)\right)\right)^{\frac{1}{q_{SC}}}e^{i2\pi\left(1-\left(\mu^{q_{SC}}_{Q_{CQTL-IP}}(ử)+\eta^{q_{SC}}_{Q_{CQTL-IP}}(ử)\right)\right)^{\frac{1}{q_{SC}}}}$ is called the refusal grade. The CQRO2-TLN is written as:

    $Q_{CQTL}=\left(\left(s_{SLT},\alpha_{SC}\right),\left(\mu_{Q_{CQTL}},\eta_{Q_{CQTL}}\right)\right)=\left(\left(s_{SLT},\alpha_{SC}\right),\left(\mu_{Q_{CQTL-RP}}e^{i2\pi\left(\mu_{Q_{CQTL-IP}}\right)},\eta_{Q_{CQTL-RP}}e^{i2\pi\left(\eta_{Q_{CQTL-IP}}\right)}\right)\right)$.

    Definition 3.2. For any two CQRO2-TLNs

    $Q_{CQTL-1}=\left(\left(s_{SLT-1},\alpha_{SC-1}\right),\left(\mu_{Q_{CQTL-RP-1}}e^{i2\pi\left(\mu_{Q_{CQTL-IP-1}}\right)},\eta_{Q_{CQTL-RP-1}}e^{i2\pi\left(\eta_{Q_{CQTL-IP-1}}\right)}\right)\right)$ and $Q_{CQTL-2}=\left(\left(s_{SLT-2},\alpha_{SC-2}\right),\left(\mu_{Q_{CQTL-RP-2}}e^{i2\pi\left(\mu_{Q_{CQTL-IP-2}}\right)},\eta_{Q_{CQTL-RP-2}}e^{i2\pi\left(\eta_{Q_{CQTL-IP-2}}\right)}\right)\right)$,

    (1). $Q_{CQTL-1}\oplus_{CQTL}Q_{CQTL-2}=\left(\Delta_{LT}\left(\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})+\Delta^{-1}_{LT}(s_{SLT-2},\alpha_{SC-2})\right),\left(\left(\mu^{q_{CQ}}_{Q_{CQTL-RP-1}}+\mu^{q_{CQ}}_{Q_{CQTL-RP-2}}-\mu^{q_{CQ}}_{Q_{CQTL-RP-1}}\mu^{q_{CQ}}_{Q_{CQTL-RP-2}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(\mu^{q_{CQ}}_{Q_{CQTL-IP-1}}+\mu^{q_{CQ}}_{Q_{CQTL-IP-2}}-\mu^{q_{CQ}}_{Q_{CQTL-IP-1}}\mu^{q_{CQ}}_{Q_{CQTL-IP-2}}\right)^{\frac{1}{q_{CQ}}}},\ \eta_{Q_{CQTL-RP-1}}\eta_{Q_{CQTL-RP-2}}e^{i2\pi\left(\eta_{Q_{CQTL-IP-1}}\eta_{Q_{CQTL-IP-2}}\right)}\right)\right)$;

    (2). $Q_{CQTL-1}\otimes_{CQTL}Q_{CQTL-2}=\left(\Delta_{LT}\left(\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})\times\Delta^{-1}_{LT}(s_{SLT-2},\alpha_{SC-2})\right),\left(\mu_{Q_{CQTL-RP-1}}\mu_{Q_{CQTL-RP-2}}e^{i2\pi\left(\mu_{Q_{CQTL-IP-1}}\mu_{Q_{CQTL-IP-2}}\right)},\ \left(\eta^{q_{CQ}}_{Q_{CQTL-RP-1}}+\eta^{q_{CQ}}_{Q_{CQTL-RP-2}}-\eta^{q_{CQ}}_{Q_{CQTL-RP-1}}\eta^{q_{CQ}}_{Q_{CQTL-RP-2}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(\eta^{q_{CQ}}_{Q_{CQTL-IP-1}}+\eta^{q_{CQ}}_{Q_{CQTL-IP-2}}-\eta^{q_{CQ}}_{Q_{CQTL-IP-1}}\eta^{q_{CQ}}_{Q_{CQTL-IP-2}}\right)^{\frac{1}{q_{CQ}}}}\right)\right)$;

    (3). $Q^{\delta_{SC}}_{CQTL-1}=\left(\Delta_{LT}\left(\left(\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})\right)^{\delta_{SC}}\right),\left(\mu^{\delta_{SC}}_{Q_{CQTL-RP-1}}e^{i2\pi\left(\mu^{\delta_{SC}}_{Q_{CQTL-IP-1}}\right)},\ \left(1-\left(1-\eta^{q_{CQ}}_{Q_{CQTL-RP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(1-\left(1-\eta^{q_{CQ}}_{Q_{CQTL-IP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}}\right)\right)$;

    (4). $\delta_{SC}Q_{CQTL-1}=\left(\Delta_{LT}\left(\delta_{SC}\times\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})\right),\left(\left(1-\left(1-\mu^{q_{CQ}}_{Q_{CQTL-RP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}e^{i2\pi\left(1-\left(1-\mu^{q_{CQ}}_{Q_{CQTL-IP-1}}\right)^{\delta_{SC}}\right)^{\frac{1}{q_{CQ}}}},\ \eta^{\delta_{SC}}_{Q_{CQTL-RP-1}}e^{i2\pi\left(\eta^{\delta_{SC}}_{Q_{CQTL-IP-1}}\right)}\right)\right)$.

    Definition 3.3. For any two CQRO2-TLNs

    $Q_{CQTL-1}=\left(\left(s_{SLT-1},\alpha_{SC-1}\right),\left(\mu_{Q_{CQTL-RP-1}}e^{i2\pi\left(\mu_{Q_{CQTL-IP-1}}\right)},\eta_{Q_{CQTL-RP-1}}e^{i2\pi\left(\eta_{Q_{CQTL-IP-1}}\right)}\right)\right)$ and $Q_{CQTL-2}=\left(\left(s_{SLT-2},\alpha_{SC-2}\right),\left(\mu_{Q_{CQTL-RP-2}}e^{i2\pi\left(\mu_{Q_{CQTL-IP-2}}\right)},\eta_{Q_{CQTL-RP-2}}e^{i2\pi\left(\eta_{Q_{CQTL-IP-2}}\right)}\right)\right)$, the score and accuracy functions are given by:

    $Ş(Q_{CQTL-1})=\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})\times\dfrac{1+\mu^{q_{SC}}_{Q_{CQTL-RP-1}}+\mu^{q_{SC}}_{Q_{CQTL-IP-1}}-\eta^{q_{SC}}_{Q_{CQTL-RP-1}}-\eta^{q_{SC}}_{Q_{CQTL-IP-1}}}{2}$, (3.2)
    $Ȟ(Q_{CQTL-1})=\Delta^{-1}_{LT}(s_{SLT-1},\alpha_{SC-1})\times\left(\mu^{q_{SC}}_{Q_{CQTL-RP-1}}+\mu^{q_{SC}}_{Q_{CQTL-IP-1}}+\eta^{q_{SC}}_{Q_{CQTL-RP-1}}+\eta^{q_{SC}}_{Q_{CQTL-IP-1}}\right)$. (3.3)

    Based on the two notions above, the comparison between two CQRO2-TLNs is given by:

    (1). If $Ş(Q_{CQTL-1})>Ş(Q_{CQTL-2})$, then $Q_{CQTL-1}>Q_{CQTL-2}$;

    (2). If $Ş(Q_{CQTL-1})=Ş(Q_{CQTL-2})$, then:

    ⅰ) $Ȟ(Q_{CQTL-1})>Ȟ(Q_{CQTL-2})$ implies $Q_{CQTL-1}>Q_{CQTL-2}$;

    ⅱ) $Ȟ(Q_{CQTL-1})=Ȟ(Q_{CQTL-2})$ implies $Q_{CQTL-1}=Q_{CQTL-2}$.
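    A numeric sketch of the score and accuracy functions of Eqs (3.2)–(3.3) is given below; the function names, the scale size g, and the value of q are illustrative assumptions.

```python
# Score and accuracy of a CQRO2-TLN (Definition 3.3, Eqs. 3.2-3.3); the 2-tuple
# part enters through the inverse Delta_LT. All names and q are illustrative.

q = 3

def delta_inv(j, alpha, g):
    return j / g + alpha

def score_2tl(j, alpha, mu_rp, mu_ip, eta_rp, eta_ip, g):
    return delta_inv(j, alpha, g) * (1 + mu_rp**q + mu_ip**q
                                     - eta_rp**q - eta_ip**q) / 2

def accuracy_2tl(j, alpha, mu_rp, mu_ip, eta_rp, eta_ip, g):
    return delta_inv(j, alpha, g) * (mu_rp**q + mu_ip**q + eta_rp**q + eta_ip**q)

# Example with ((s_3, 0.01), (0.6 e^{i2pi(0.6)}, 0.4 e^{i2pi(0.4)})) and g = 6
print(score_2tl(3, 0.01, 0.6, 0.6, 0.4, 0.4, 6))
```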

    Table 2.  Loss functions and their representations for complex q-rung orthopair fuzzy 2-tuple linguistic variables.
    Symbols   $\mathcal{F}_{B}$ ($P_{AC}$: correctly classified)   $\sim\mathcal{F}_{NB}$ ($N_{AC}$: wrongly classified)
    $\chi_{P_{AC}}$   $Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}$   $Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}$
    $\chi_{B_{AC}}$   $Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}$   $Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}$
    $\chi_{N_{AC}}$   $Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}$   $Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}$
    Each entry is a CQRO2-TLN of the form $Q_{CQTL-Q_{CQ-jk}}=\left(\left(s_{SLT}(Q_{CQ-jk}),\alpha_{SC}\right),\left(\mu_{Q_{CQTL-RP}}(Q_{CQ-jk})e^{i2\pi\left(\mu_{Q_{CQTL-IP}}(Q_{CQ-jk})\right)},\eta_{Q_{CQTL-RP}}(Q_{CQ-jk})e^{i2\pi\left(\eta_{Q_{CQTL-IP}}(Q_{CQ-jk})\right)}\right)\right)$ with $j\in\{P_{AC},B_{AC},N_{AC}\}$ and $k\in\{P_{AC},N_{AC}\}$.
    Note: $Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}$, $Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}$ = costs of the correct and of the mistaken classification of the object under the acceptance decision; $Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}$, $Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}$ = costs under the abstinence (boundary) decision; $Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}$, $Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}$ = costs under the rejection decision.


    The purpose of this section is to explore the novel DTRS model by using CQRO2-TLVs based on three-way decisions. The LF matrix for CQRO2-TLVs is presented in Table 2.

    From Table 2, the following inequalities hold:

    $Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}<Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}<Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}$, (4.1)
    $Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}<Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}<Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}$. (4.2)

    Furthermore, from Table 2, the expected losses $Y_{EL}(\chi_{j_{AC}}|[ử]),\ j=P,B,N$, of taking the different actions are expressed below:

    $Y_{EL}(\chi_{P_{AC}}|[ử])=Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (4.3)
    $Y_{EL}(\chi_{B_{AC}}|[ử])=Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (4.4)
    $Y_{EL}(\chi_{N_{AC}}|[ử])=Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$. (4.5)

    Assuming that $Pr(\mathcal{F}_{B}|[ử])=\delta_{B}$ and $Pr(\sim\mathcal{F}_{NB}|[ử])=\delta_{NB}$, the identity $Pr(\mathcal{F}_{B}|[ử])+Pr(\sim\mathcal{F}_{NB}|[ử])=1$ becomes $\delta_{B}+\delta_{NB}=1$.

    Theorem 4.1. With the notation above, the expected losses can be written explicitly as:

    YEL(χPAC|[])=(ΔLT(δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC)),((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ×ei2π(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ,((ηQCQTLRP(QCQPACPAC))δB×(ηQCQTLRP(QCQPACNAC))δNB)ei2π((ηQCQTLIP(QCQPACPAC))δB×(ηQCQTLIP(QCQPACNAC))δNB))), (4.6)
    YEL(χBAC|[])=(ΔLT(δB×Δ1LT(sSLT(QCQBACPAC),αSC)+δNB×Δ1LT(sSLT(QCQBACNAC),αSC)),((1(1μqCQQCQTLRP(QCQBACPAC))δB(1μqCQQCQTLRP(QCQBACNAC))δNB)1qCQ×ei2π(1(1μqCQQCQTLIP(QCQBACPAC))δB(1μqCQQCQTLIP(QCQBACNAC))δNB)1qCQ,((ηQCQTLRP(QCQBACPAC))δB×(ηQCQTLRP(QCQBACNAC))δNB)ei2π((ηQCQTLIP(QCQBACPAC))δB×(ηQCQTLIP(QCQBACNAC))δNB))), (4.7)
    YEL(χNAC|[])=(ΔLT(δB×Δ1LT(sSLT(QCQNACPAC),αSC)+δNB×Δ1LT(sSLT(QCQNACNAC),αSC)),((1(1μqCQQCQTLRP(QCQNACPAC))δB(1μqCQQCQTLRP(QCQNACNAC))δNB)1qCQ×ei2π(1(1μqCQQCQTLIP(QCQNACPAC))δB(1μqCQQCQTLIP(QCQNACNAC))δNB)1qCQ,((ηQCQTLRP(QCQNACPAC))δB×(ηQCQTLRP(QCQNACNAC))δNB)ei2π((ηQCQTLIP(QCQNACPAC))δB×(ηQCQTLIP(QCQNACNAC))δNB))). (4.8)

    Proof. First, we prove that the Eq (4.6) holds. The proofs of Eq (4.7) and of Eq (4.8) are similar.

    YEL(χPAC|[])=QCQTLQCQPACPACPr(FB|[])CQTLQCQTLQCQPACPACPr(FNB|[])=QCQTLQCQPACPACδBCQTLQCQTLQCQPACPACδNB=((ΔLT(δB×Δ1LT(sSLT(QCQPACPAC),αSC)),((1(1μqCQQCQTLRP(QCQPACPAC))δB)1qCQei2π(1(1μqCQQCQTLIP(QCQPACPAC))δB)1qCQ,(ηQCQTLRP(QCQPACPAC))δBei2π(ηQCQTLIP(QCQPACPAC))δB))CQTL(ΔLT(δNB×Δ1LT(sSLT(QCQPACNAC),αSC)),((1(1μqCQQCQTLRP(QCQPACNAC))δB)1qCQei2π(1(1μqCQQCQTLIP(QCQPACNAC))δB)1qCQ,(ηQCQTLRP(QCQPACNAC))δBei2π(ηQCQTLIP(QCQPACNAC))δB)))=(ΔLT(δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC)),((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ×ei2π(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ,((ηQCQTLRP(QCQPACPAC))δB×(ηQCQTLRP(QCQPACNAC))δNB)ei2π((ηQCQTLIP(QCQPACPAC))δB×(ηQCQTLIP(QCQPACNAC))δNB)))

    Moreover, we compute the expected values as follows:

    QEV(YEL(χPAC|[]))=((δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC))×((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB))8, (4.9)
    QEV(YEL(χBAC|[]))=((δB×Δ1LT(sSLT(QCQBACPAC),αSC)+δNB×Δ1LT(sSLT(QCQBACNAC),αSC))×((1(1μqCQQCQTLRP(QCQBACPAC))δB(1μqCQQCQTLRP(QCQBACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQBACPAC))δB(1μqCQQCQTLIP(QCQBACNAC))δNB)1qCQ(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB))8, (4.10)
    QEV(YEL(χDAC|[]))=((δB×Δ1LT(sSLT(QCQNACPAC),αSC)+δNB×Δ1LT(sSLT(QCQNACNAC),αSC))×((1(1μqCQQCQTLRP(QCQNACPAC))δB(1μqCQQCQTLRP(QCQNACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQNACPAC))δB(1μqCQQCQTLIP(QCQNACNAC))δNB)1qCQ(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB))8. (4.11)

    Whenever the expected values fail to distinguish between two expected losses, we resort to the accuracy function, which is expressed as follows:

    GAF(YEL(χPAC|[]))=((δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC))×((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ+(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB+(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB))8, (4.12)
    GAF(YEL(χBAC|[]))=((δB×Δ1LT(sSLT(QCQBACPAC),αSC)+δNB×Δ1LT(sSLT(QCQBACNAC),αSC))×((1(1μqCQQCQTLRP(QCQBACPAC))δB(1μqCQQCQTLRP(QCQBACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQBACPAC))δB(1μqCQQCQTLIP(QCQBACNAC))δNB)1qCQ+(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB+(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB))8, (4.13)
    GAF(YEL(χDAC|[]))=((δB×Δ1LT(sSLT(QCQNACPAC),αSC)+δNB×Δ1LT(sSLT(QCQNACNAC),αSC))×((1(1μqCQQCQTLRP(QCQNACPAC))δB(1μqCQQCQTLRP(QCQNACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQNACPAC))δB(1μqCQQCQTLIP(QCQNACNAC))δNB)1qCQ+(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB+(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB))8. (4.14)

    Furthermore, the three-way decision rules are obtained as follows:

    PAC1:WhenQEV(YEL(χPAC|[]))<QEV(YEL(χBAC|[]))QEV(YEL(χPAC|[]))=QEV(YEL(χBAC|[]))GAF(YEL(χPAC|[]))GAF(YEL(χBAC|[]))QEV(YEL(χPAC|[]))<QEV(YEL(χNAC|[]))QEV(YEL(χPAC|[]))=QEV(YEL(χNAC|[]))GAF(YEL(χPAC|[]))GAF(YEL(χNAC|[])),thenPOS(FP); (4.15)
    BAC1:WhenQEV(YEL(χBAC|[]))<QEV(YEL(χPAC|[]))QEV(YEL(χBAC|[]))=QEV(YEL(χPAC|[]))GAF(YEL(χBAC|[]))GAF(YEL(χPAC|[]))QEV(YEL(χBAC|[]))<QEV(YEL(χNAC|[]))QEV(YEL(χBAC|[]))=QEV(YEL(χNAC|[]))GAF(YEL(χBAC|[]))GAF(YEL(χNAC|[])),thenBUN(FP); (4.16)
    NAC1:WhenQEV(YEL(χNAC|[]))<QEV(YEL(χPAC|[]))QEV(YEL(χNAC|[]))=QEV(YEL(χPAC|[]))GAF(YEL(χNAC|[]))GAF(YEL(χPAC|[]))QEV(YEL(χNAC|[]))<QEV(YEL(χBAC|[]))QEV(YEL(χNAC|[]))=QEV(YEL(χBAC|[]))GAF(YEL(χNAC|[]))GAF(YEL(χBAC|[])),thenNEG(FP). (4.17)

    Therefore,

    $G_{AF}\left(Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}\right)<G_{AF}\left(Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}\right)<G_{AF}\left(Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}\right)$, (4.18)
    $G_{AF}\left(Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}\right)<G_{AF}\left(Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}\right)<G_{AF}\left(Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}\right)$. (4.19)
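    The refined rules (4.15)–(4.17) compare expected values first and break ties with the accuracy function; a small sketch of this selection logic, with real numbers standing in for the CQRO2-TL quantities, is given below (the fallback to the boundary region is an assumption for the ambiguous case).

```python
# Sketch of the refined three-way rules (4.15)-(4.17): compare Q_EV first,
# break ties with G_AF. Numeric inputs are an illustrative stand-in.

def assign_region(qev, gaf):
    """qev, gaf: dicts mapping 'P', 'B', 'N' to expected value and accuracy."""
    def beats(a, b):
        return qev[a] < qev[b] or (qev[a] == qev[b] and gaf[a] <= gaf[b])
    for region, action, others in (("POS", "P", ("B", "N")),
                                   ("BND", "B", ("P", "N")),
                                   ("NEG", "N", ("P", "B"))):
        if all(beats(action, o) for o in others):
            return region
    return "BND"   # assumed fallback when no action dominates both others

print(assign_region({"P": 0.21, "B": 0.33, "N": 0.56},
                    {"P": 0.40, "B": 0.45, "N": 0.50}))   # -> "POS"
```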

    In real decision problems, interrelationships among the different attributes are ever-present. The MSM and GMSM operators are efficient techniques for capturing the interrelation between attributes. The purpose of this section is to explore the GMSM and weighted GMSM operators based on CQRO2-TLVs.

    Definition 5.1. For a family of CQRO2-TLVs $Q_{CQTL-j}\ (j=1,2,3,\ldots,n)$, the CQRO2-TLGMSM operator is given by:

    $CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})=\left(\dfrac{\bigoplus_{1\le j_{1}<\cdots<j_{K_{SC}}\le n}\bigotimes_{i=1}^{K_{SC}}Q^{\alpha_{SC-i}}_{CQTL-j_{i}}}{C^{K_{SC}}_{n}}\right)^{\frac{1}{\alpha_{SC-1}+\alpha_{SC-2}+\cdots+\alpha_{SC-K_{SC}}}}$ (5.1)

    where $K_{SC}=1,2,\ldots,n$, $\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}}\ge 0$, and $(j_{1},j_{2},\ldots,j_{K_{SC}})$ designates a $K_{SC}$-tuple drawn from $(1,2,\ldots,n)$.

    Theorem 5.2. For the family of CQRO2-TLVs

    QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj)))(j=1,2,3,,n),

    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=(ΔLT((1j1jKSCKSCi=1(Δ1LT(sSLTji,αSCji))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((11j1jKSC(1KSCi=1(μqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(((11j1jKSC(1KSCi=1(μqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ),(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ)). (5.2)

    Proof. Let

    KSCi=1QαSCiCQTLji=(ΔLT(KSCi=1(Δ1LT(sSLTji,αSCji))αSCi),(KSCi=1(μQCQTLRPj)αSCiei2π(KSCi=1(μQCQTLIPj)αSCi),(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1qCQei2π(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1qCQ))
    1j1jKSCKSCi=1QαSCiCQTLji=(ΔLT(1j1jKSCKSCi=1(Δ1LT(sSLTji,αSCji))αSCi),((11j1jKSC(1KSCi=1(μqSCQCQTLRPj)αSCi))1qSCei2π(11j1jKSC(1KSCi=1(μqSCQCQTLIPj)αSCi))1qSC,1j1jKSC(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1qCQei2π(1j1jKSC(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1qCQ)))
    1j1jKSCKSCi=1QαSCiCQTLjiCKSCn=(ΔLT(1j1jKSCKSCi=1(Δ1LT(sSLTji,αSCji))αSCiCKSCn),((11j1jKSC(1KSCi=1(μqCQQCQTLRPj)αSCi)1CKSCn)1qCQei2π(11j1jKSC(1KSCi=1(μqCQQCQTLIPj)αSCi)1CKSCn)1qCQ,(1j1jKSC(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1CKSCn)1qCQei2π(1j1jKSC(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1CKSCn)1qCQ))
    (1j1jKSCKSCi=1QαSCiCQTLjiCKSCn)1αSC1+αSC2++αSCKSC=(ΔLT((1j1jKSCKSCi=1(Δ1LT(sSLTji,αSCji))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((11j1jKSC(1KSCi=1(μqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(((11j1jKSC(1KSCi=1(μqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ),(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ))

    Furthermore, by using the investigated operators, we discuss certain properties such as idempotency, commutativity, monotonicity, and boundedness, which are very important for the proposed work.

    Theorem 5.3. For the family of CQRO2-TLVs

    QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj)))(j=1,2,3,,n), then

    (1). QCQTLj=QCQTL=((sSLT,αSC),(μQCQTLRPei2π(μQCQTLIP),ηQCQTLRPei2π(ηQCQTLIP))), implies CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn) =QCQTL.

    (2). if $Q_{CQTL-j}$ is a collection of CQRO2-TLVs and $Q'_{CQTL-j}$ is any permutation of $Q_{CQTL-j}$, then $CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})=CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q'_{CQTL-1},Q'_{CQTL-2},\ldots,Q'_{CQTL-n})$.

    (3). if $Q_{CQTL-j}$ and $Q'_{CQTL-j}$ are any two collections of CQRO2-TLVs such that $Q_{CQTL-j}\le Q'_{CQTL-j}$ for all $j$,

    then $CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQTL-1},\ldots,Q_{CQTL-n})\le CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q'_{CQTL-1},\ldots,Q'_{CQTL-n})$.

    (4). if QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj))) is a family of the CQRO2-TLVs, then

    $\min(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})\le CQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})\le\max(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})$.

    Proof.

    (1). If QCQTL=((sSLT,αSC),(μQCQTLRPei2π(μQCQTLIP),ηQCQTLRPei2π(ηQCQTLIP))), then we have

    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=(ΔLT((1j1jKSCKSCi=1(Δ1LT(sSLTji,αSCji))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((11j1jKSC(1KSCi=1(μqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(((11j1jKSC(1KSCi=1(μqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ),(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLRPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLIPj)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ))=(ΔLT((1j1jKSCKSCi=1(Δ1LT(sSLT,αSC))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((11j1jKSC(1KSCi=1(μqCQQCQTLRP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(((11j1jKSC(1KSCi=1(μqCQQCQTLIP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ),(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLRP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(11j1jKSC(1KSCi=1(1ηqCQQCQTLIP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ))
    =(ΔLT(((Δ1LT(sSLT,αSC))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((1(1(μqCQQCQTLRP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQei2π((1(1(μqCQQCQTLIP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ,(1(1(1(1ηqCQQCQTLRP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(1(1(1ηqCQQCQTLIP)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ))=((sSLT,αSC),(μQCQTLRPei2π(μQCQTLIP),ηQCQTLRPei2π(ηQCQTLIP)))=QCQTL.

    (2). By supposition, $Q_{CQTL-j}$ is a collection of CQRO2-TLVs and $Q'_{CQTL-j}$ is any permutation of $Q_{CQTL-j}$; then

    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=(1j1jKSCKSCi=1QαSCiCQTLjiCKSCn)1αSC1+αSC2++αSCKSC=(1j1jKSCKSCi=1Q'αSCiCQTLjiCKSCn)1αSC1+αSC2++αSCKSC=CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(Q'CQTL1,Q'CQTL2,,Q'CQTLn).

    (3). By hypothesis, if $Q_{CQTL-j}$ and $Q'_{CQTL-j}$ are any two collections of CQRO2-TLVs such that $Q_{CQTL-j}\le Q'_{CQTL-j}$, then

    KSCi=1QαSCiCQTLjiKSCi=1Q'αSCiCQTLji1j1jKSCKSCi=1QαSCiCQTLji1j1jKSCKSCi=1Q'αSCiCQTLji1j1jKSCKSCi=1QαSCiCQTLjiCKSCn1j1jKSCKSCi=1Q'αSCiCQTLjiCKSCn(1j1jKSCKSCi=1QαSCiCQTLjiCKSCn)1αSC1+αSC2++αSCKSC(1j1jKSCKSCi=1Q'αSCiCQTLjiCKSCn)1αSC1+αSC2++αSCKSC.

    Hence

    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(Q'CQTL1,Q'CQTL2,,Q'CQTLn).

    (4). By hypothesis, if we have

    QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj))),

    a collection of CQRO2-TLVs, and let $Q^{-}_{CQTL}=\min(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})$ and $Q^{+}_{CQTL}=\max(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})$; then, by means of properties (1) and (3), we have

    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=QCQTL
    CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(Q+CQTL1,Q+CQTL2,,Q+CQTLn)=Q+CQTL.

    Hence

    min(QCQTL1,QCQTL2,,QCQTLn)CQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)max(QCQTL1,QCQTL2,,QCQTLn).

    Definition 5.4. For a family of CQRO2-TLVs $Q_{CQTL-j}\ (j=1,2,3,\ldots,n)$, the WCQRO2-TLGMSM operator is given by:

    $WCQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}})}(Q_{CQTL-1},Q_{CQTL-2},\ldots,Q_{CQTL-n})=\left(\dfrac{\bigoplus_{1\le j_{1}<\cdots<j_{K_{SC}}\le n}\bigotimes_{i=1}^{K_{SC}}\left(\Omega_{WV-j_{i}}Q_{CQTL-j_{i}}\right)^{\alpha_{SC-i}}}{C^{K_{SC}}_{n}}\right)^{\frac{1}{\alpha_{SC-1}+\alpha_{SC-2}+\cdots+\alpha_{SC-K_{SC}}}}$ (5.3)

    where $K_{SC}=1,2,\ldots,n$, $\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}}\ge 0$, $(j_{1},j_{2},\ldots,j_{K_{SC}})$ represents a $K_{SC}$-tuple drawn from $(1,2,\ldots,n)$, and $\Omega_{WV}=(\Omega_{WV-1},\Omega_{WV-2},\ldots,\Omega_{WV-n})^{T}$ is the weight vector with $\sum_{j=1}^{n}\Omega_{WV-j}=1$.
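    The weighted operator of Eq (5.3) first scales each argument by its weight and then applies the GMSM of Eq (2.9)/(5.1); the following numeric analogue (ordinary arithmetic in place of the CQRO2-TLV operations) illustrates this, and all identifiers are illustrative assumptions.

```python
# Numeric analogue of the weighted GMSM of Eq. (5.3): weight each argument,
# then apply the GMSM combinatorics. Real arithmetic stands in for the
# CQRO2-TLV laws (illustrative simplification).

from itertools import combinations
from math import comb, prod

def wgmsm(values, weights, k, alphas):
    assert abs(sum(weights) - 1.0) < 1e-9 and len(alphas) == k
    scaled = [w * v for w, v in zip(weights, values)]
    total = sum(prod(scaled[j] ** a for j, a in zip(subset, alphas))
                for subset in combinations(range(len(values)), k))
    return (total / comb(len(values), k)) ** (1 / sum(alphas))

print(wgmsm([0.4, 0.6, 0.7], weights=[0.4, 0.35, 0.25], k=2, alphas=(1, 1)))
```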

    Theorem 5.5. For the family of CQRO2-TLVs

    QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj)))(j=1,2,3,,n),
    WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=(ΔLT((1j1jKSCKSCi=1(ΩWVjiΔ1LT(sSLTji,αSCji))αSCiCKSCn)1αSC1+αSC2++αSCKSC),(((11j1jKSC(1KSCi=1(1(1μqCQQCQTLRPj)ΩWVji)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π((11j1jKSC(1KSCi=1(1(1μqCQQCQTLIPj)ΩWVji)αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ,(1(11j1jKSC(1KSCi=1(1ηΩWVjiqCQQCQTLRPj())αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ×ei2π(1(11j1jKSC(1KSCi=1(1ηΩWVjiqCQQCQTLIPj())αSCi)1CKSCn)1αSC1+αSC2++αSCKSC)1qCQ)). (5.4)

    Proof. Straightforward.

    Theorem 5.6. For the family of CQRO2-TLVs

    QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj)))(j=1,2,3,,n),

    (1). if QCQTLj=QCQTL=((sSLT,αSC),(μQCQTLRPei2π(μQCQTLIP),ηQCQTLRPei2π(ηQCQTLIP))), then WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=QCQTL.

    (2). if $Q_{CQTL-j}$ is a collection of CQRO2-TLVs and $Q'_{CQTL-j}$ is any permutation of $Q_{CQTL-j}$, then

    WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)=WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(Q'CQTL1,Q'CQTL2,,Q'CQTLn).

    (3). if $Q_{CQTL-j}$ and $Q'_{CQTL-j}$ are two collections of CQRO2-TLVs such that $Q_{CQTL-j}\le Q'_{CQTL-j}$, then

    WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(Q'CQTL1,Q'CQTL2,,Q'CQTLn).

    (4). if QCQTLj=((sSLTj,αSCj),(μQCQTLRPjei2π(μQCQTLIPj),ηQCQTLRPjei2π(ηQCQTLIPj))) is a family of the CQRO2-TLVs, then

    min(QCQTL1,QCQTL2,,QCQTLn)WCQRO2TLGMSM(KSC,αSC1,αSC2,,αSCKSC)(QCQTL1,QCQTL2,,QCQTLn)max(QCQTL1,QCQTL2,,QCQTLn).

    Proof. Straightforward.

    We now apply the loss functions based on CQRO2-TLVs and the 3WD model for CQRO2-TLVs, where the action set is $A_{AC}=\{\chi_{P_{AC}},\chi_{B_{AC}},\chi_{N_{AC}}\}$, the state set is $\Omega_{S}=\{\mathcal{F}_{B},\sim\mathcal{F}_{NB}\}$, and the probability vector is

    $D=\{Pr(\mathcal{F}_{B}|[ử]),Pr(\sim\mathcal{F}_{NB}|[ử])\}$ with $Pr(\mathcal{F}_{B}|[ử])+Pr(\sim\mathcal{F}_{NB}|[ử])=1$. Then, we have the following steps:

    Step 1: By using the Eqs (6.1)–(6.3), we examine the LFs, which are stated below:

    QCQTLQCQPACPAC=((sSLT(QCQPACPAC),αSC),(μQCQTLRP(QCQPACPAC)ei2π(μQCQTLIP(QCQPACPAC)),ηQCQTLRP(QCQPACPAC)ei2π(ηQCQTLIP(QCQPACPAC))))andQCQTLQCQBACPAC=((sSLT(QCQBACPAC),αSC),(μQCQTLRP(QCQBACPAC)ei2π(μQCQTLIP(QCQBACPAC)),ηQCQTLRP(QCQBACPAC)ei2π(ηQCQTLIP(QCQBACPAC)))); (6.1)
    QCQTLQCQNACPAC=((sSLT(QCQNACPAC),αSC),(μQCQTLRP(QCQNACPAC)ei2π(μQCQTLIP(QCQNACPAC)),ηQCQTLRP(QCQNACPAC)ei2π(ηQCQTLIP(QCQNACPAC))))andQCQTLQCQPACNAC=((sSLT(QCQPACNAC),αSC),(μQCQTLRP(QCQPACNAC)ei2π(μQCQTLIP(QCQPACNAC)),ηQCQTLRP(QCQPACNAC)ei2π(ηQCQTLIP(QCQPACNAC)))); (6.2)
    QCQTLQCQBACNAC=((sSLT(QCQBACNAC),αSC),(μQCQTLRP(QCQBACNAC)ei2π(μQCQTLIP(QCQBACNAC)),ηQCQTLRP(QCQBACNAC)ei2π(ηQCQTLIP(QCQBACNAC))))andQCQTLQCQNACNAC=((sSLT(QCQNACNAC),αSC),(μQCQTLRP(QCQNACNAC)ei2π(μQCQTLIP(QCQNACNAC)),ηQCQTLRP(QCQNACNAC)ei2π(ηQCQTLIP(QCQNACNAC)))). (6.3)

    Step 2: By using Eq (6.4), we aggregate the decision matrices constructed by the decision experts.

    $WCQRO2\text{-}TLGMSM^{(K_{SC},\alpha_{SC-1},\alpha_{SC-2},\ldots,\alpha_{SC-K_{SC}})}\left(Q^{1}_{CQTL-ij},Q^{2}_{CQTL-ij},\ldots,Q^{n}_{CQTL-ij}\right)=Q_{CQTL-ij}$. (6.4)

    Step 3: By using Eqs (6.5)–(6.7), we compute the expected losses $Y_{EL}(\chi_{j_{AC}}|[ử]),\ j=P,B,N$, of the different actions as follows:

    $Y_{EL}(\chi_{P_{AC}}|[ử])=Q_{CQTL-Q_{CQ-P_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-P_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (6.5)
    $Y_{EL}(\chi_{B_{AC}}|[ử])=Q_{CQTL-Q_{CQ-B_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-B_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$, (6.6)
    $Y_{EL}(\chi_{N_{AC}}|[ử])=Q_{CQTL-Q_{CQ-N_{AC}P_{AC}}}Pr(\mathcal{F}_{B}|[ử])\oplus_{CQTL}Q_{CQTL-Q_{CQ-N_{AC}N_{AC}}}Pr(\sim\mathcal{F}_{NB}|[ử])$. (6.7)

    Step 4: By using the Eqs (6.8)–(6.10), we examine the expected values, which are stated below:

    QEV(YEL(χPAC|[]))=((δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC))×((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB))8, (6.8)
    QEV(YEL(χBAC|[]))=((δB×Δ1LT(sSLT(QCQBACPAC),αSC)+δNB×Δ1LT(sSLT(QCQBACNAC),αSC))×((1(1μqCQQCQTLRP(QCQBACPAC))δB(1μqCQQCQTLRP(QCQBACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQBACPAC))δB(1μqCQQCQTLIP(QCQBACNAC))δNB)1qCQ(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB))8, (6.9)
    QEV(YEL(χDAC|[]))=((δB×Δ1LT(sSLT(QCQNACPAC),αSC)+δNB×Δ1LT(sSLT(QCQNACNAC),αSC))×((1(1μqCQQCQTLRP(QCQNACPAC))δB(1μqCQQCQTLRP(QCQNACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQNACPAC))δB(1μqCQQCQTLIP(QCQNACNAC))δNB)1qCQ(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB))8. (6.10)

    Moreover, we investigate the accuracy function, which is stated below:

    GAF(YEL(χPAC|[]))=((δB×Δ1LT(sSLT(QCQPACPAC),αSC)+δNB×Δ1LT(sSLT(QCQPACNAC),αSC))×((1(1μqCQQCQTLRP(QCQPACPAC))δB(1μqCQQCQTLRP(QCQPACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQPACPAC))δB(1μqCQQCQTLIP(QCQPACNAC))δNB)1qCQ+(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB+(ηQCQTLRP(QCQPACPAC))δB(ηQCQTLRP(QCQPACNAC))δNB))8, (6.11)
    GAF(YEL(χBAC|[]))=((δB×Δ1LT(sSLT(QCQBACPAC),αSC)+δNB×Δ1LT(sSLT(QCQBACNAC),αSC))×((1(1μqCQQCQTLRP(QCQBACPAC))δB(1μqCQQCQTLRP(QCQBACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQBACPAC))δB(1μqCQQCQTLIP(QCQBACNAC))δNB)1qCQ+(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB+(ηQCQTLRP(QCQBACPAC))δB(ηQCQTLRP(QCQBACNAC))δNB))8, (6.12)
    GAF(YEL(χDAC|[]))=((δB×Δ1LT(sSLT(QCQNACPAC),αSC)+δNB×Δ1LT(sSLT(QCQNACNAC),αSC))×((1(1μqCQQCQTLRP(QCQNACPAC))δB(1μqCQQCQTLRP(QCQNACNAC))δNB)1qCQ+(1(1μqCQQCQTLIP(QCQNACPAC))δB(1μqCQQCQTLIP(QCQNACNAC))δNB)1qCQ+(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB+(ηQCQTLRP(QCQNACPAC))δB(ηQCQTLRP(QCQNACNAC))δNB))8. (6.13)

    Step 5: By using the Eqs (6.14)–(6.16), we examine the three-way decision rules, which are listed below:

    {P}_{AC-1} : \;{\rm{When}}\; {Q}_{EV}\left({Y}_{EL}\left({\chi }_{{P}_{AC}}\left|\right[ử]\right)\right)\le {G}_{AF}\left({Y}_{EL}\left({\chi }_{{N}_{AC}}\left|\right[ử]\right)\right) , \;{\rm{then}}\; ử\in POS\left({\mathcal{F}}_{P}\right) ; (6.14)
    {B}_{AC-1} : \;{\rm{When}}\; {Q}_{EV}\left({Y}_{EL}\left({\chi }_{{B}_{AC}}\left|\right[ử]\right)\right)\le {G}_{AF}\left({Y}_{EL}\left({\chi }_{{N}_{AC}}\left|\right[ử]\right)\right) , \;{\rm{then}}\; ử\in BUN\left({\mathcal{F}}_{P}\right) ; (6.15)
    {N}_{AC-1} : \;{\rm{When}}\; {Q}_{EV}\left({Y}_{EL}\left({\chi }_{{N}_{AC}}\left|\right[ử]\right)\right)\le {G}_{AF}\left({Y}_{EL}\left({\chi }_{{B}_{AC}}\left|\right[ử]\right)\right) , \;{\rm{then}}\; ử\in NEG\left({\mathcal{F}}_{P}\right) ; (6.16)

    Step 6: This completes the decision procedure.
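    Steps 1–5 can be summarized in a compact, self-contained sketch for a single object, with real numbers standing in for CQRO2-TLNs and a plain weighted mean standing in for the WCQRO2-TLGMSM fusion of Eq (6.4); all identifiers are illustrative assumptions, not the paper's notation.

```python
# High-level sketch of Steps 1-5 for one object (illustrative simplification).

def decide(expert_losses, expert_weights, p_b):
    # expert_losses: list (one per expert) of dicts action -> (loss under F_B,
    # loss under ~F_NB), i.e. the Step 1 loss functions.
    actions = expert_losses[0].keys()
    fused = {a: tuple(sum(w * m[a][s] for w, m in zip(expert_weights, expert_losses))
                      for s in (0, 1))
             for a in actions}                                            # Step 2
    expected = {a: lp * p_b + ln * (1 - p_b)
                for a, (lp, ln) in fused.items()}                        # Steps 3-4
    return min(expected, key=expected.get)                               # Step 5

experts = [{"P": (0.0, 0.9), "B": (0.3, 0.4), "N": (0.8, 0.1)},
           {"P": (0.1, 0.8), "B": (0.2, 0.5), "N": (0.9, 0.0)},
           {"P": (0.0, 1.0), "B": (0.4, 0.4), "N": (0.7, 0.1)}]
print(decide(experts, [0.4, 0.35, 0.25], p_b=0.3))
```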

    Example 6.1. Three decision experts {D}_{DE-k}\;(k = 1, 2, 3) with weight vector {\mathrm{\Omega }}_{\mathcal{W}\mathcal{V}} = {\left(\mathrm{0.4, 0.35, 0.25}\right)}^{\boldsymbol{T}} evaluate the objects \left\{{ử}_{1}, {ử}_{2}, {ử}_{3}, {ử}_{4}\right\} . Then, \mathrm{Pr}\left({\mathcal{F}}_{B}\left|\right[{ử}_{j}]\right) = 0.3, j = 1, 2, 3, 4 .

    Step 1: By using Eqs (6.1)–(6.3), we obtain the loss functions reported in Tables 3, 4, and 5:

    Table 3.  Information matrix {\mathcal{D}}_{1} .
    {\mathrm{ử}}_{1} {\mathrm{ử}}_{2}
    {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB} {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{1}, 0.01\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.02\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.03\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{3}, 0.011\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.021\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{5}, 0.031\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.041\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{4}, 0.012\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}\\, 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.022\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.032\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.042\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right)
    {\mathrm{ử}}_{3} {\mathrm{ử}}_{4}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.013\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.023\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.033\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{2}, 0.014\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.024\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.034\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.043\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{3}, 0.015\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.025\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{3}, 0.035\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.044\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)

    Table 4.  Information matrix by {\mathcal{D}}_{2} .
    {\bf{ử}}_{1} {\bf{ử}}_{2}
    {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB} {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.02\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.03\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.04\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{5}, 0.05\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{1}, 0.021\right), \\ \left(\begin{array}{c}0.81{e}^{i2\pi \left(0.81\right)}, \\ 0.11{e}^{i2\pi \left(0.11\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{3}, 0.031\right), \\ \left(\begin{array}{c}0.61{e}^{i2\pi \left(0.61\right)}, \\ 0.21{e}^{i2\pi \left(0.21\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.041\right), \\ \left(\begin{array}{c}0.1{e}^{i2\pi \left(0.1\right)}, \\ 0.41{e}^{i2\pi \left(0.41\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.01\right), \\ \left(\begin{array}{c}0.71{e}^{i2\pi \left(0.71\right)}, \\ 0.21{e}^{i2\pi \left(0.21\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{2}, 0.022\right), \\ \left(\begin{array}{c}0.82{e}^{i2\pi \left(0.82\right)}, \\ 0.12{e}^{i2\pi \left(0.12\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.032\right), \\ \left(\begin{array}{c}0.62{e}^{i2\pi \left(0.62\right)}, \\ 0.22{e}^{i2\pi \left(0.22\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.042\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.042\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right)
    {\bf{ử}}_{3} {\bf{ử}}_{4}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.03\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.03\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.04\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{5}, 0.031\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.041\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{3}, 0.031\right), \\ \left(\begin{array}{c}0.61{e}^{i2\pi \left(0.61\right)}, \\ 0.21{e}^{i2\pi \left(0.21\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.041\right), \\ \left(\begin{array}{c}0.1{e}^{i2\pi \left(0.1\right)}, \\ 0.41{e}^{i2\pi \left(0.41\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{4}, 0.032\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.042\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.032\right), \\ \left(\begin{array}{c}0.62{e}^{i2\pi \left(0.62\right)}, \\ 0.22{e}^{i2\pi \left(0.22\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.042\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)

    Table 5.  Information matrix by {\mathcal{D}}_{3} .
    {\bf{ử}}_{1} {\bf{ử}}_{2}
    {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB} {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.013\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.023\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.033\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{2}, 0.014\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.024\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.034\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{2}, 0.043\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.03\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.03\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.04\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right)
    {\bf{ử}}_{3} {\bf{ử}}_{4}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.013\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.023\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.033\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.04\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.02\right), \\ \left(\begin{array}{c}0.8{e}^{i2\pi \left(0.8\right)}, \\ 0.1{e}^{i2\pi \left(0.1\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.03\right), \\ \left(\begin{array}{c}0.6{e}^{i2\pi \left(0.6\right)}, \\ 0.2{e}^{i2\pi \left(0.2\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.04\right), \\ \left(\begin{array}{c}0.5{e}^{i2\pi \left(0.5\right)}, \\ 0.4{e}^{i2\pi \left(0.4\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{5}, 0.05\right), \\ \left(\begin{array}{c}0.7{e}^{i2\pi \left(0.7\right)}, \\ 0.3{e}^{i2\pi \left(0.3\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{1}, 0.021\right), \\ \left(\begin{array}{c}0.81{e}^{i2\pi \left(0.81\right)}, \\ 0.11{e}^{i2\pi \left(0.11\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{3}, 0.031\right), \\ \left(\begin{array}{c}0.61{e}^{i2\pi \left(0.61\right)}, \\ 0.21{e}^{i2\pi \left(0.21\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.041\right), \\ \left(\begin{array}{c}0.1{e}^{i2\pi \left(0.1\right)}, \\ 0.41{e}^{i2\pi \left(0.41\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{4}, 0.01\right), \\ \left(\begin{array}{c}0.71{e}^{i2\pi \left(0.71\right)}, \\ 0.21{e}^{i2\pi \left(0.21\right)}\end{array}\right)\end{array}\right)

    Table 6.  By using the Eq (6.4), we aggregate the information of Tables 3, 4, and 5.
    {\bf{ử}}_{1} {\bf{ử}}_{2}
    {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB} {\mathcal{F}}_{B} {\sim \mathcal{F}}_{NB}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.012\right), \\ \left(\begin{array}{c}0.299{e}^{i2\pi \left(0.299\right)}, \\ 0.678{e}^{i2\pi \left(0.678\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.41}, 0.021\right), \\ \left(\begin{array}{c}0.275{e}^{i2\pi \left(0.275\right)}, \\ 0.675{e}^{i2\pi \left(0.675\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.031\right), \\ \left(\begin{array}{c}0.251{e}^{i2\pi \left(0.251\right)}, \\ 0.622{e}^{i2\pi \left(0.622\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.39}, 0.042\right), \\ \left(\begin{array}{c}0.274{e}^{i2\pi \left(0.274\right)}, \\ 0.66{e}^{i2\pi \left(0.66\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0.72}, 0.0112\right), \\ \left(\begin{array}{c}0.293{e}^{i2\pi \left(0.293\right)}, \\ 0.632{e}^{i2\pi \left(0.632\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.76}, 0.022\right), \\ \left(\begin{array}{c}0.274{e}^{i2\pi \left(0.274\right)}, \\ 0.606{e}^{i2\pi \left(0.606\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.85}, 0.033\right), \\ \left(\begin{array}{c}0.274{e}^{i2\pi \left(0.274\right)}, \\ 0.656{e}^{i2\pi \left(0.656\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.74}, 0.043\right), \\ \left(\begin{array}{c}0.269{e}^{i2\pi \left(0.269\right)}, \\ 0.659{e}^{i2\pi \left(0.659\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{1.04}, 0.013\right), \\ \left(\begin{array}{c}0.34{e}^{i2\pi \left(0.34\right)}, \\ 0.599{e}^{i2\pi \left(0.599\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.18}, 0.024\right), \\ \left(\begin{array}{c}0.318{e}^{i2\pi \left(0.318\right)}, \\ 0.563{e}^{i2\pi \left(0.563\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.41}, 0.032\right), \\ \left(\begin{array}{c}0.297{e}^{i2\pi \left(0.297\right)}, \\ 0.657{e}^{i2\pi \left(0.657\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.14}, 0.042\right), \\ \left(\begin{array}{c}0.318{e}^{i2\pi \left(0.318\right)}, \\ 0.619{e}^{i2\pi \left(0.619\right)}\end{array}\right)\end{array}\right)
    {\mathrm{ử}}_{3} {\mathrm{ử}}_{4}
    {\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0}, 0.0142\right), \\ \left(\begin{array}{c}0.279{e}^{i2\pi \left(0.279\right)}, \\ 0.56{e}^{i2\pi \left(0.56\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.443}, 0.0221\right), \\ \left(\begin{array}{c}0.25{e}^{i2\pi \left(0.25\right)}, \\ 0.75{e}^{i2\pi \left(0.75\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.0321\right), \\ \left(\begin{array}{c}0.21{e}^{i2\pi \left(0.21\right)}, \\ 0.62{e}^{i2\pi \left(0.62\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.59}, 0.043\right), \\ \left(\begin{array}{c}0.254{e}^{i2\pi \left(0.254\right)}, \\ 0.70{e}^{i2\pi \left(0.70\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{0.69}, 0.013\right), \\ \left(\begin{array}{c}0.288{e}^{i2\pi \left(0.288\right)}, \\ 0.612{e}^{i2\pi \left(0.612\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.81}, 0.023\right), \\ \left(\begin{array}{c}0.287{e}^{i2\pi \left(0.287\right)}, \\ 0.599{e}^{i2\pi \left(0.599\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.81}, 0.032\right), \\ \left(\begin{array}{c}0.257{e}^{i2\pi \left(0.257\right)}, \\ 0.670{e}^{i2\pi \left(0.670\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.57}, 0.0434\right), \\ \left(\begin{array}{c}0.273{e}^{i2\pi \left(0.273\right)}, \\ 0.69{e}^{i2\pi \left(0.69\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}} \left(\begin{array}{c}\left({s}_{1.31}, 0.023\right), \\ \left(\begin{array}{c}0.349{e}^{i2\pi \left(0.349\right)}, \\ 0.615{e}^{i2\pi \left(0.615\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.78}, 0.029\right), \\ \left(\begin{array}{c}0.389{e}^{i2\pi \left(0.389\right)}, \\ 0.569{e}^{i2\pi \left(0.569\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1.84}, 0.038\right), \\ \left(\begin{array}{c}0.302{e}^{i2\pi \left(0.302\right)}, \\ 0.686{e}^{i2\pi \left(0.686\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0.35}, 0.0423\right), \\ \left(\begin{array}{c}0.319{e}^{i2\pi \left(0.319\right)}, \\ 0.617{e}^{i2\pi \left(0.617\right)}\end{array}\right)\end{array}\right)


    Step 2: Applying Eq (6.4) with {K}_{SC} = 3 and {\alpha }_{SC-1} = {\alpha }_{SC-2} = {\alpha }_{SC-3} = 1 , we obtain Table 6; a crisp sketch of the underlying aggregation is given below.
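    Equation (6.4) itself is not reproduced in this section, so the following is only a minimal crisp sketch of the generalized Maclaurin symmetric mean (GMSM) that underlies it, evaluated for real-valued grades with {K}_{SC} = 3 and unit parameters; the function name gmsm and the sample grades are illustrative, not taken from the source.

```python
from itertools import combinations
from math import prod

def gmsm(grades, k, alphas):
    """Generalized Maclaurin symmetric mean of real-valued grades in [0, 1].

    Averages the weighted products over all k-element subsets and then takes
    the (alpha_1 + ... + alpha_k)-th root; the paper applies the analogous
    operator to CQRO2-TL values through their operational laws.
    """
    if not (1 <= k <= len(grades)) or len(alphas) != k:
        raise ValueError("need 1 <= k <= len(grades) and len(alphas) == k")
    subsets = list(combinations(grades, k))
    mean_of_products = sum(
        prod(a ** w for a, w in zip(subset, alphas)) for subset in subsets
    ) / len(subsets)
    return mean_of_products ** (1.0 / sum(alphas))

# With K_SC = 3 and alpha_1 = alpha_2 = alpha_3 = 1 over three grades, the GMSM
# reduces to their geometric mean (illustrative input values).
print(gmsm([0.299, 0.275, 0.251], k=3, alphas=[1, 1, 1]))
```

    With k equal to the number of aggregated arguments and all parameters set to one, the GMSM reduces to the geometric mean, so the choice {\alpha }_{SC-1} = {\alpha }_{SC-2} = {\alpha }_{SC-3} = 1 treats the three aggregated tables symmetrically.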

    Step 3: Using Eqs (6.5)–(6.7), we compute the expected losses {Y}_{EL}\left({\chi }_{{j}_{AC}}\left|\right[ử]\right), j = P, B, N , for {\delta }_{B-j} = 0.4, j = 1, 2, 3 , and {q}_{SC} = 1 ; the results for the separate actions are reported in Table 7, and a crisp sketch of the expected-loss computation is given after the table.

    Table 7.  Expected losses obtained by applying Eqs (6.5)–(6.7) to the aggregated information of Table 6.
    Symbols {\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right) {\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right) {\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)
    {\boldsymbol{ử}}_{1} \left(\begin{array}{c}\left({s}_{0}, 0.1866\right), \\ \left(\begin{array}{c}0.2504{e}^{i2\pi \left(0.2504\right)}, \\ 0.6762{e}^{i2\pi \left(0.6762\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.1656\right), \\ \left(\begin{array}{c}0.2817{e}^{i2\pi \left(0.2817\right)}, \\ 0.6163{e}^{i2\pi \left(0.6163\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.1244\right), \\ \left(\begin{array}{c}0.3269{e}^{i2\pi \left(0.3269\right)}, \\ 0.5771{e}^{i2\pi \left(0.5771\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{ử}}_{2} \left(\begin{array}{c}\left({s}_{1}, 0.0044\right), \\ \left(\begin{array}{c}0.2649{e}^{i2\pi \left(0.2649\right)}, \\ 0.6445{e}^{i2\pi \left(0.6445\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.1346\right), \\ \left(\begin{array}{c}0.271{e}^{i2\pi \left(0.271\right)}, \\ 0.6578{e}^{i2\pi \left(0.6578\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.1676\right), \\ \left(\begin{array}{c}0.3097{e}^{i2\pi \left(0.3097\right)}, \\ 0.6339{e}^{i2\pi \left(0.6339\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{ử}}_{3} \left(\begin{array}{c}\left({s}_{0}, 0.1921\right), \\ \left(\begin{array}{c}0.2617{e}^{i2\pi \left(0.2617\right)}, \\ 0.6673{e}^{i2\pi \left(0.6673\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.1714\right), \\ \left(\begin{array}{c}0.2874{e}^{i2\pi \left(0.2874\right)}, \\ 0.6042{e}^{i2\pi \left(0.6042\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{1}, 0.025\right), \\ \left(\begin{array}{c}0.3733{e}^{i2\pi \left(0.3733\right)}, \\ 0.587{e}^{i2\pi \left(0.587\right)}\end{array}\right)\end{array}\right)
    {\boldsymbol{ử}}_{4} \left(\begin{array}{c}\left({s}_{1}, 0.0166\right), \\ \left(\begin{array}{c}0.2367{e}^{i2\pi \left(0.2367\right)}, \\ 0.6668{e}^{i2\pi \left(0.6668\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.172\right), \\ \left(\begin{array}{c}0.2666{e}^{i2\pi \left(0.2666\right)}, \\ 0.6819{e}^{i2\pi \left(0.6819\right)}\end{array}\right)\end{array}\right) \left(\begin{array}{c}\left({s}_{0}, 0.192\right), \\ \left(\begin{array}{c}0.3123{e}^{i2\pi \left(0.3123\right)}, \\ 0.6437{e}^{i2\pi \left(0.6437\right)}\end{array}\right)\end{array}\right)

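    Equations (6.5)–(6.7) are the CQRO2-TL analogue of the classical decision-theoretic rough-set expected losses, in which each action's loss is weighted by the conditional probability of the favourable state {\mathcal{F}}_{B} and of its complement. A minimal crisp sketch of that classical computation follows; the function name and the loss values are illustrative, not the paper's.

```python
def expected_losses(p_favourable, loss):
    """Classical DTRS expected losses for the positive (P), boundary (B) and
    negative (N) actions, given Pr(F_B | [u]) and a table of loss values
    loss[(action, state)] with state in {"B", "NB"}.
    """
    p, q = p_favourable, 1.0 - p_favourable
    return {
        action: loss[(action, "B")] * p + loss[(action, "NB")] * q
        for action in ("P", "B", "N")
    }

# Illustrative loss values only; the paper's Table 7 is obtained from the
# CQRO2-TL entries of Table 6 rather than from crisp numbers like these.
loss = {("P", "B"): 0.0, ("P", "NB"): 0.6,
        ("B", "B"): 0.2, ("B", "NB"): 0.3,
        ("N", "B"): 0.7, ("N", "NB"): 0.0}
print(expected_losses(0.4, loss))
```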

    Step 4: Using Eqs (6.8)–(6.10), we compute the expected (score) values reported in Table 8; one common score form is sketched after the table.

    Table 8.  Expected values of the information shown in Table 7.
    Symbols {\boldsymbol{Q}}_{\boldsymbol{E}\boldsymbol{V}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right) {\boldsymbol{Q}}_{\boldsymbol{E}\boldsymbol{V}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right) {\boldsymbol{Q}}_{\boldsymbol{E}\boldsymbol{V}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right)
    {\boldsymbol{ử}}_{1} 0.0137 0.017 0.0119
    {\boldsymbol{ử}}_{2} 0.0185 0.0096 0.0095
    {\boldsymbol{ử}}_{3} 0.0136 0.0186 0.0089
    {\boldsymbol{ử}}_{4} 0.0211 0.0088 0.0085

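    Since Eqs (6.8)–(6.10) are not reproduced in this section, the sketch below shows only one widely used score form for the complex q-rung orthopair part of a grade; the paper's expected values in Table 8 additionally involve the 2-tuple linguistic part (s_t, \alpha), so this sketch is not expected to reproduce those numbers. The function name and arguments are illustrative.

```python
def cqrof_score(mu, nu, w_mu, w_nu, q=1):
    """One common score form for a complex q-rung orthopair grade
    (mu e^{i 2 pi w_mu}, nu e^{i 2 pi w_nu}): the signed difference of the
    q-th powers of membership and non-membership, averaged over the
    amplitude and phase terms; values lie in [-1, 1].
    """
    return 0.5 * ((mu ** q - nu ** q) + (w_mu ** q - w_nu ** q))

# Amplitude and phase grades from the first cell of Table 7, with q = 1.
print(cqrof_score(0.2504, 0.6762, 0.2504, 0.6762, q=1))
```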

    When the expected values fail to discriminate between two expected losses, we turn to the accuracy function, whose values are reported in Table 9; a crisp sketch of one common accuracy form is given after the table.

    Table 9.  Accuracy values of the information shown in Table 7.
    Symbols {\boldsymbol{G}}_{\boldsymbol{A}\boldsymbol{F}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{P}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right) {\boldsymbol{G}}_{\boldsymbol{A}\boldsymbol{F}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{B}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right) {\boldsymbol{G}}_{\boldsymbol{A}\boldsymbol{F}}\left({\boldsymbol{Y}}_{\boldsymbol{E}\boldsymbol{L}}\left({\boldsymbol{\chi }}_{{\boldsymbol{N}}_{\boldsymbol{A}\boldsymbol{C}}}\left|\right[\boldsymbol{ử}]\right)\right)
    {\boldsymbol{ử}}_{1} 0.266 0.2222 0.1555
    {\boldsymbol{ử}}_{2} 0.282 0.1867 0.2219
    {\boldsymbol{ử}}_{3} 0.27 0.2257 0.2731
    {\boldsymbol{ử}}_{4} 0.3127 0.2435 0.2656

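    Analogously, a common accuracy form for the complex q-rung orthopair part sums, rather than subtracts, the q-th powers of the grades; as above this is only an illustrative sketch, and the paper's {G}_{AF} in Table 9 also involves the 2-tuple linguistic part.

```python
def cqrof_accuracy(mu, nu, w_mu, w_nu, q=1):
    """One common accuracy form: the total commitment of the grade,
    averaged over the amplitude and phase terms; values lie in [0, 1]."""
    return 0.5 * ((mu ** q + nu ** q) + (w_mu ** q + w_nu ** q))

# Same illustrative grades as in the score sketch above, with q = 1.
print(cqrof_accuracy(0.2504, 0.6762, 0.2504, 0.6762, q=1))
```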

    Step 5: Using Eqs (6.11)–(6.13), we derive the three-way decision rules listed in Table 10; a crisp sketch of the classical minimum-loss rule is given after the table.

    Table 10.  Three-way decisions based on Eqs (6.11)–(6.13).
    Enterprises Decision rule
    {\boldsymbol{ử}}_{1} {P}_{AC-1}
    {\boldsymbol{ử}}_{2} {P}_{AC-1}
    {\boldsymbol{ử}}_{3} {P}_{AC-1}
    {\boldsymbol{ử}}_{4} {P}_{AC-1}

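    Equations (6.11)–(6.13) are not reproduced in this section; in the classical decision-theoretic rough-set reading, an alternative is assigned to the positive, boundary, or negative region according to which action has the smallest expected loss. The sketch below shows that classical rule with illustrative numbers only; the paper's rules compare score and accuracy values of CQRO2-TL losses, which is how every alternative in Table 10 is assigned {P}_{AC-1} .

```python
def classical_three_way(loss_p, loss_b, loss_n):
    """Classical DTRS assignment: choose the region whose action has the
    smallest expected loss."""
    regions = {"positive": loss_p, "boundary": loss_b, "negative": loss_n}
    return min(regions, key=regions.get)

# Illustrative values: the positive action has the smallest expected loss here.
print(classical_three_way(0.08, 0.12, 0.35))
```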

    Step 6: This completes the procedure.

    The obtained results show that all of the alternatives fall into the positive region, i.e., the decision {P}_{AC-1} . Furthermore, a comparison of the proposed work with some existing work is given in Table 11.

    Table 11.  Comparative analysis of the proposed work with existing work.
    Enterprises {\boldsymbol{ử}}_{1} {\boldsymbol{ử}}_{2} {\boldsymbol{ử}}_{3} {\boldsymbol{ử}}_{4}
    Proposed work for q=1 {P}_{AC-1} {P}_{AC-1} {P}_{AC-1} {P}_{AC-1}
    Proposed work for q=2 {P}_{AC-1} {P}_{AC-1} {P}_{AC-1} {P}_{AC-1}
    Proposed work for q=3 {P}_{AC-1} {P}_{AC-1} {P}_{AC-1} {P}_{AC-1}


    From the above analysis, we conclude that all of the approaches yield the same result, as reported in Table 11; in every case the alternatives belong to the positive region. Graphical interpretations of the expected values in Table 8 and the accuracy values in Table 9, for the positive, boundary, and negative actions, are shown in Figure 4 and Figure 5, respectively.

    Figure 4.  Graphical representation for the information of Table 8.
    Figure 5.  Graphical representation for the information of Table 9.
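    For readers who want to reproduce a chart in the style of Figure 4, the following minimal matplotlib sketch plots the expected values of Table 8 as grouped bars; the variable names and styling are illustrative, not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt

# Expected values from Table 8 (rows: alternatives 1-4; columns: actions P, B, N).
expected = np.array([
    [0.0137, 0.0170, 0.0119],
    [0.0185, 0.0096, 0.0095],
    [0.0136, 0.0186, 0.0089],
    [0.0211, 0.0088, 0.0085],
])

x = np.arange(expected.shape[0])   # one group of bars per alternative
width = 0.25
for i, action in enumerate(["P", "B", "N"]):
    plt.bar(x + (i - 1) * width, expected[:, i], width, label=f"action {action}")
plt.xticks(x, [f"alternative {j + 1}" for j in x])
plt.ylabel("expected value")
plt.legend()
plt.show()
```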

    In the proposed work, if we choose q = 1 or q = 2 and set the imaginary (phase) part to zero, the framework reduces to the intuitionistic 2-tuple and Pythagorean 2-tuple linguistic settings, respectively. Similarly, for q = 1 or q = 2 with a nonzero phase part, it reduces to the complex intuitionistic 2-tuple and complex Pythagorean 2-tuple linguistic settings. The presented approach is therefore more general and more capable than the existing ones given in [34,35,36,37]; a small illustration of why larger q admits more grades is given below.
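    A small check of the usual admissibility constraint makes this reduction concrete: a complex q-rung orthopair grade is admissible when the q-th powers of the membership and non-membership amplitudes (and, separately, of the phase terms) sum to at most one, so lowering q from 3 to 2 or 1 shrinks the set of admissible grades. The function below is an illustrative sketch, not the paper's notation.

```python
def is_admissible(mu, nu, w_mu, w_nu, q):
    """Usual q-rung orthopair constraints on the amplitude and phase terms."""
    return mu ** q + nu ** q <= 1.0 and w_mu ** q + w_nu ** q <= 1.0

# The grade (0.75, 0.75) is admissible for q = 3 but not under the Pythagorean
# (q = 2) or intuitionistic (q = 1) constraints.
for q in (1, 2, 3):
    print(q, is_admissible(0.75, 0.75, 0.75, 0.75, q))
```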

    We adapted the notions of 3WD and DTRS to the CQRO2-TLV environment and established their essential properties. Since the GMSM is a flexible tool for capturing the interrelationships among arguments in real-life problems, we combined CQRO2-TL information with the GMSM to present the CQRO2-TLGMSM and WCQRO2-TLGMSM operators and demonstrated their key properties, including idempotency, commutativity, monotonicity, and boundedness. We also developed a q-rung orthopair 2-tuple linguistic DTRS model, discussed its applications, and illustrated the proposed methods with examples. Finally, a comparative analysis demonstrated the workability, effectiveness, and advantages of the proposed notions over certain existing ones.

    In future work, we will extend the proposed notions for CQRO2-TLVSs to the frameworks of complex q-rung orthopair fuzzy sets [38], picture hesitant fuzzy sets [39,40], neutrosophic sets [41,42], and other useful frameworks and notions, as given in [43,44,45,46,47,48,49].

    This work was supported by the Researchers Supporting Project (number RSP-2021/244), King Saud University, Riyadh, Saudi Arabia.

    The authors declare no conflict of interest regarding the publication of this article.


