Research article

Robust kernel regression function with uncertain scale parameter for high dimensional ergodic data using k-nearest neighbor estimation

  • In this paper, we consider a new method for the problem of estimating the scoring function γa, with a constant a, in a functional space with an unknown scale parameter under a nonparametric robust regression model. Based on the k-Nearest Neighbors (kNN) method, the primary objective is to prove the asymptotic normality of this estimator in the case of a stationary ergodic process. We begin by establishing the almost sure convergence of a conditional distribution estimator. Then, we derive the almost sure convergence (with rate) of the conditional median (scale parameter estimator) and the asymptotic normality of the robust regression function, even when the scale parameter is unknown. Finally, simulation and real-world data results confirm our theoretical analysis: the performance of the kNN estimator is comparable to that of the well-known kernel estimator, and it outperforms a nonparametric series (spline) estimator when there are irrelevant regressors.

    Citation: Fatimah Alshahrani, Wahiba Bouabsa, Ibrahim M. Almanjahie, Mohammed Kadi Attouch. Robust kernel regression function with uncertain scale parameter for high dimensional ergodic data using k-nearest neighbor estimation[J]. AIMS Mathematics, 2023, 8(6): 13000-13023. doi: 10.3934/math.2023655




    Set theory is a crucial concept in the study of fundamental mathematics. In classical set theory, however, a set is determined solely by its elements; that is, the concept of a set is exact. For example, the set of even integers is exact because every integer is either odd or even. Nevertheless, in our daily lives we encounter various problems involving inexactness. For instance, the concept of a young man is imprecise, because we cannot split all men into two sharply defined classes: young men and older men. Thus "young" is not an exact but a vague concept. The classical set requires precision for all of mathematics. For this reason, imprecision is important to computer scientists, mathematicians, and philosophers interested in problems containing uncertainty. There exist many theories to deal with imprecision, such as vague set theory, probability theory, intuitionistic FS theory, and interval mathematics; these theories have their own merits and demerits.

    Zadeh [61] invented the concept of a fuzzy set (FS), which was the first successful response to vagueness. In this approach, sets are established by partial membership, unlike classical sets, where exact membership is required. It can cope with issues containing ambiguities and resolve decision-making (DM) problems, and it frequently provides proper techniques for characterizing ambiguity, but this theory has its own set of problems, as discussed in [33].

    It was in 1999 that Molodtsov [33] introduced a new mathematical approach for dealing with imprecision. This new approach is known as soft set theory (SST), which is free of the difficulties occurring in existing theories and has more comprehensive practical applications. SST's first practical applications were presented by Maji et al. [30,31], who also defined several operations and made a theoretical study of SST. Ali et al. [1] provided some new SS operations and refined the notion of the complement of a SS. The parameters in SST are often vague words. To solve such problems, the fuzzy soft set (FSS) was defined by Maji et al. [32] as an amalgamation of SS and FS. Real-world DM problems can be solved with FSSs: a problem-solving method based on FSS theory was discussed by Roy and Maji in [40], an interval-valued FSS was presented by Yang et al. in [58], where a DM problem is investigated using the interval-valued FSS, and, in a recent study, Bhardwaj et al. [6] described an advanced uncertainty measure based on FSS, as well as the application of this measure to DM problems. More applications of SSs and FSs can be found in [12,13,51].

    Rough set theory (RST), proposed by Pawlak in 1982 [37], is another mathematical approach to deal with problems that contain imprecision. Like FS theory, it is not a replacement for classical set theory but rather an integrated part of it. The advantage of RST is that it requires no extra or preliminary knowledge of the data, such as statistical probabilities. RST has found many useful applications, particularly in the areas of data collection, artificial intelligence, and DM. The RS technique appears to be of fundamental relevance to the cognitive sciences [37,38], and fundamental to knowledge creation from databases, pattern recognition, inductive reasoning, and expert systems. Pawlak's RST is based on a partition, or equivalence relation. Because it can only handle complete data, such a partition has limitations for many applications. To handle these problems, the equivalence relation has been replaced by similarity relations, tolerance relations, neighborhood systems, general binary relations, and others. SS, FS, and RST were merged by Feng et al. [15]. Rough soft sets and soft rough sets are examined in [2,41]. The SBr and knowledge-base approximation of RSs was discussed by Li et al. [26]. SRFSs and SFRSs were discussed by Meng et al. [34]; novel FRS models were presented by Zhang et al. [63] and applied to MCGDM; using picture FS and RS theory, Sahu et al. [49] presented a career selection method for students; and a built-in FUCOM-Rough SAW supplier selection model was presented by Durmić et al. [9]. Dominance-based rough set theory was used by Sharma et al. in their recent study [50] to select criteria and make hotel decisions, a rough-MABAC-DoE-based metamodel for iron and steel supplier selection was developed by Chattopadhyay et al. [7], Fariha et al. [15] presented a novel decision-making method based on rough fuzzy information, and multi-criteria decision-making methods under soft rough fuzzy knowledge were discussed by Akram et al. [5]. An application of generalized intuitionistic fuzzy soft sets to renewable energy source selection was discussed by Khan et al. [22]. FS and RS are combined in [8], and Xu et al. [54] discussed the FRS model over dual universes. Some generalizations of FSs and SSs, along with their applications, can be seen in [21,35,36].

    The original RS model depends on a single equivalence relation. In many actual circumstances, when dealing with data with multiple granulations, this might be problematic. Aiming to deal with these problems, Qian et al. [39] presented an MGRS model to approximate a subset of a universe with respect to multiple equivalence relations instead of a single equivalence relation, establishing a new research direction for RS in the multi-granulation setting. MGRS has attracted a substantial number of scholars from all over the world who have contributed considerably to its development and applications. Depending on specific ordered and tolerance relations, Xu et al. [56] discussed two types of MGRSs, FMGRS is introduced in [57], and multi-granulation rough sets from the crisp to the fuzzy case were defined by Yang et al. [60]. Ali et al. [3] described improved types of dominance-based MGRS and their applications in conflict analysis problems, two new MGRS types were introduced by Xu et al. [55], neighborhood-based MGRS was discussed by Lin et al. [27], and Liu et al. [28] discussed MGCRS. An optimistic MGRS-based classification for medical diagnostics was proposed by Kumar et al. [20]. Huang et al. [19] defined intuitionistic FMGRSs by combining the concepts of MGRS and intuitionistic FS.

    Numerous practical problems, including relating the symptoms of diseases and the drugs used in disease diagnosis, involve more than one universe of objects, whereas the original rough set model deals with problems that arise in a single universe of objects. To solve such problems, and to develop the relationship between single-universe and dual-universe models, Liu [24] and Yan et al. [59] introduced generalized RS models over dual universes of objects rather than a single universe. Ma and Sun [29] developed the probabilistic RS across dual universes to quantify knowledge uncertainty, the graded RS model over dual universes and its characteristics were described by Liu et al. [25], the reduction of an information system using an SBr-based approximation of a set across dual universes was presented by Shabir et al. [42], a dual-universe FRS based on interval data was presented by Zhang et al. [62], Wu et al. [53] developed the fuzzy rough approximation of a set across dual universes, and MGRS over dual universes of objects was presented by Sun et al. [44]. MGRS over dual universes is a well-organized framework for addressing a variety of DM problems. Moreover, it has grown in popularity among experts in multiple decision problems and attracts a wide range of theoretical and empirical studies. Zhang et al. [64] presented PFMGRS over dual universes and showed how it may be used in mergers and acquisitions. Decision-theoretic RSs over dual universes based on MGFs for DM, and three-way GDM, were described by Sun et al. [45,46]. Din et al. [10] recently presented the PMGRFS over dual universes. Further applications of MG over dual universes in GDM can be found in [47,48]. For steam turbine fault diagnosis, Zhang et al. [65] presented the FMGRS over two universes, and Tan et al. [52] demonstrated decision-making with MGRS over two universes and granulation selection.

    The concept of MGRS was introduced by Qian et al. in [39] through the utilization of multiple equivalence relations $\rho_i$ on a universe $U$. Sun et al. [44] replaced the multiple equivalence relations $\rho_i$ by general binary relations $\rho_i$ over $U \times V$, which is more general than MG, and also discussed OMGRS and PMGRS over dual universes. On the other hand, Shabir et al. [11,43] generalized these notions of MGRS and replaced the relations by SBrs on $U \times V$. This was a very interesting generalization of multigranulation roughness, but it has the issue that some properties fail to hold; for example, the lower approximation is not contained in the upper approximation. Secondly, for the roughness of a crisp set the result consists of two soft sets, and the question is then how to rank an object. We suggest the multigranulation roughness of a fuzzy set to address this query. In light of the above studies, we present a novel optimistic MGR of an FS over dual universes.

    The major contributions of this study are:

    ● We extend the notion of MGRS to the MGR of an FS based on SBrs over two universes, approximating an FS $\mu \in F(V)$ by using the aftersets of the SBrs and an FS $\gamma \in F(U)$ by using the foresets of the SBrs. After that, we look into certain algebraic aspects of our suggested model.

    ● Accuracy measures are discussed to quantify the exactness or roughness of the proposed MGRFS model.

    ● Two algorithms are defined for decision-making, and an example is discussed from an application point of view.

    The remainder of the article is organised as follows. The fundamental ideas of FS, Pawlak RS, MGRSs, SBr, and FSS are recalled in Section 2. The optimistic multi-granulation roughness of a fuzzy set over dual universes by two soft binary relations, as well as its fundamental algebraic properties and examples, is presented in Section 3. The optimistic multi-granulation roughness of a fuzzy set over two universes by multiple soft binary relations is presented in Section 4, along with some of its fundamental algebraic properties. The accuracy measures for the presented optimistic multigranulation fuzzy soft sets are given in Section 5. We concentrate on algorithms and a few real-world examples of decision-making problems in Section 6. Finally, we conclude the paper in Section 7.

    Basic concepts of FS, RS, MGRS, SS, SBr, and FSS are presented in this section; these concepts will all be used in later sections.

    Definition 2.1. [61] A membership function $\mu: W \to [0,1]$ on a set of objects $W$ is called a FS in $W$; equivalently, a FS is the set $\{(w, \mu(w)) : w \in W\}$. Let $\mu$ and $\mu_1$ be two FSs in $W$. Then $\mu \subseteq \mu_1$ if $\mu(w) \le \mu_1(w)$ for all $w \in W$. Moreover, $\mu = \mu_1$ if $\mu \subseteq \mu_1$ and $\mu \supseteq \mu_1$. If $\mu(w) = 1$ for all $w \in W$, then $\mu$ is called the whole FS in $W$. The null FS and the whole FS are usually denoted by $0$ and $1$, respectively.

    Definition 2.2. [61] The intersection and union of two fuzzy sets $\mu$ and $\gamma$ in $W$ are defined pointwise as follows:

    $(\gamma \cap \mu)(w) = \gamma(w) \wedge \mu(w), \qquad (\gamma \cup \mu)(w) = \gamma(w) \vee \mu(w),$

    for all $w \in W$, where $\wedge$ and $\vee$ denote minimum and maximum, respectively.
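    As a concrete illustration (our own Python sketch, not part of the paper), fuzzy sets over a finite $W$ can be stored as dicts of membership grades, with union and intersection computed pointwise:

```python
# Fuzzy sets over a finite W stored as {element: grade} dicts (our sketch).

def fuzzy_union(g, m):
    """(g ∪ m)(w) = g(w) ∨ m(w): pointwise maximum (Definition 2.2)."""
    return {w: max(g.get(w, 0), m.get(w, 0)) for w in sorted(set(g) | set(m))}

def fuzzy_intersection(g, m):
    """(g ∩ m)(w) = g(w) ∧ m(w): pointwise minimum (Definition 2.2)."""
    return {w: min(g.get(w, 0), m.get(w, 0)) for w in sorted(set(g) | set(m))}

g = {"w1": 0.2, "w2": 0.7}
m = {"w1": 0.5, "w2": 0.3}
print(fuzzy_union(g, m))         # {'w1': 0.5, 'w2': 0.7}
print(fuzzy_intersection(g, m))  # {'w1': 0.2, 'w2': 0.3}
```

    Missing elements default to grade 0, matching the convention that an element outside the support has zero membership.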

    Definition 2.3. [61] The set $\mu_\alpha = \{w \in W : \mu(w) \ge \alpha\}$, where $0 < \alpha \le 1$, is known as the $\alpha$-cut of a FS $\mu$ in $W$.

    Example 2.1. Let $U = \{u_1, u_2, u_3, u_4, u_5\}$, and let the membership mapping $\mu: U \to [0,1]$ be defined by $\mu = \frac{0.5}{u_1} + \frac{0.2}{u_2} + \frac{0.1}{u_3} + \frac{0.7}{u_4} + \frac{1}{u_5}$; then $\mu$ is a FS in $U$.

    Let $\alpha = 0.3$. Then the set $\mu_\alpha = \{u_1, u_4, u_5\}$ is known as the $\alpha$-cut, or level set, of the FS $\mu$ in $U$.
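    The $\alpha$-cut of Definition 2.3 is a one-line filter; the following sketch (ours, with illustrative names) reproduces Example 2.1:

```python
def alpha_cut(mu, alpha):
    """μ_α = {w : μ(w) ≥ α} for a fuzzy set stored as a dict (Definition 2.3)."""
    return {w for w, grade in mu.items() if grade >= alpha}

mu = {"u1": 0.5, "u2": 0.2, "u3": 0.1, "u4": 0.7, "u5": 1.0}
print(alpha_cut(mu, 0.3) == {"u1", "u4", "u5"})  # True, as in Example 2.1
```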

    Definition 2.4. [37] Let $\psi$ be an equivalence relation on $W$ and let $[w]_\psi$ denote the equivalence class of $w$ with respect to $\psi$. For any $M \subseteq W$, the sets $\underline{\psi}(M) = \{w \in W : [w]_\psi \subseteq M\}$ and $\overline{\psi}(M) = \{w \in W : [w]_\psi \cap M \neq \emptyset\}$ are known as the Pawlak lower and upper approximations of $M$, respectively. The set $BN_\psi(M) = \overline{\psi}(M) \setminus \underline{\psi}(M)$ is called the boundary region of $M$. If $BN_\psi(M) = \emptyset$, then we say that $M$ is definable (exact); otherwise, $M$ is rough with respect to $\psi$. To determine how accurate a set $M$ is, the accuracy measure is defined by $\alpha_\psi(M) = \frac{|\underline{\psi}(M)|}{|\overline{\psi}(M)|}$ and the roughness measure by $1 - \alpha_\psi(M)$.
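    A minimal sketch of Definition 2.4 for a finite universe, assuming the equivalence relation is given by its list of classes (all names are illustrative, not from the paper):

```python
from fractions import Fraction

def pawlak(W, classes, M):
    """Lower/upper approximations of M ⊆ W for a partition of W (Definition 2.4)."""
    lower = {w for w in W for C in classes if w in C and C <= M}   # [w] ⊆ M
    upper = {w for w in W for C in classes if w in C and C & M}    # [w] ∩ M ≠ ∅
    return lower, upper

W = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]        # equivalence classes of ψ
lo, up = pawlak(W, classes, {1, 2, 3})
print(sorted(lo), sorted(up))             # [1, 2] [1, 2, 3, 4]
print(Fraction(len(lo), len(up)))         # accuracy measure: 1/2
```

    Here $M = \{1,2,3\}$ is rough, since its boundary $\{3,4\}$ is non-empty and its accuracy is $1/2$.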

    Qian et al. [39] enlarged the Pawlak RS model into an MGRS model, where the set approximations are established by multiple equivalence relations.

    Definition 2.5. [39] Let $\hat\psi_1, \hat\psi_2, \ldots, \hat\psi_j$ be $j$ equivalence relations on a universal set $W$ and let $M \subseteq W$. Then the lower approximation and upper approximation of $M$ are defined as

    $\underline{M}_{\sum_{i=1}^{j}\hat\psi_i} = \{w \in W : [w]_{\hat\psi_i} \subseteq M \text{ for some } i,\ 1 \le i \le j\}, \qquad \overline{M}_{\sum_{i=1}^{j}\hat\psi_i} = \left(\underline{M^c}_{\sum_{i=1}^{j}\hat\psi_i}\right)^c.$
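    Definition 2.5 can be sketched for finite data as follows (our illustration; the relations are given as partitions): an object enters the lower approximation when its class under some relation fits inside $M$, and the upper approximation is the stated dual.

```python
def mgrs_lower(W, partitions, M):
    """Optimistic MGRS lower approximation (Definition 2.5): w qualifies when
    its class under SOME of the relations is contained in M."""
    def cls(part, w):
        return next(C for C in part if w in C)
    return {w for w in W if any(cls(p, w) <= M for p in partitions)}

def mgrs_upper(W, partitions, M):
    """Upper approximation via the duality: complement of the lower
    approximation of the complement."""
    return W - mgrs_lower(W, partitions, W - M)

W = {1, 2, 3, 4}
p1 = [{1, 2}, {3, 4}]
p2 = [{1}, {2, 3}, {4}]
print(sorted(mgrs_lower(W, [p1, p2], {1, 3})))  # [1]
print(sorted(mgrs_upper(W, [p1, p2], {1, 3})))  # [1, 2, 3]
```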

    Definition 2.6. [33] A SS over $W$ is defined as a pair $(\psi, A)$, where $\psi: A \to P(W)$ is a mapping, $W$ is a finite set, and $A \subseteq E$ (the set of parameters).

    Definition 2.7. [14] A SS $(\psi, A)$ over $W \times W$ is called a SBr on $W$, denoted by $SBr(W)$.

    Example 2.2. Let $U = \{u_1, u_2, u_3, u_4, u_5\}$ represent some students and let $A = \{e_1, e_2\}$ be the set of parameters, where $e_1$ represents mathematics and $e_2$ represents computer science. Then the mapping $\psi: A \to P(U)$ defined by $\psi(e_1) = \{u_1, u_4, u_5\}$ and $\psi(e_2) = \{u_2, u_3, u_4\}$ is a soft set over $U$.

    Li et al. [23] presented the notion of a SBr in a more general form and defined a generalized SBr from $W$ to $V$, as follows.

    Definition 2.8. [23] If $(\psi, A)$ is a SS over $W \times V$, that is, $\psi: A \to P(W \times V)$, then $(\psi, A)$ is said to be a SBr (SB-relation) on $W \times V$, denoted by $SBr(W, V)$.

    Definition 2.9. [40] Let $F(W)$ be the set of all FSs on $W$. Then the pair $(\psi, A)$ is known as a FSS over $W$, where $A \subseteq E$ (the set of parameters) and $\psi: A \to F(W)$.

    Definition 2.10. [40] Let $(\psi_1, A)$ and $(\psi_2, B)$ be two FSSs over a common universe. Then $(\psi_1, A)$ is a FS subset of $(\psi_2, B)$ if $A \subseteq B$ and $\psi_1(e)$ is a FS subset of $\psi_2(e)$ for each $e \in A$. The FSSs $(\psi_1, A)$ and $(\psi_2, B)$ are equal if and only if $(\psi_1, A)$ is a FS subset of $(\psi_2, B)$ and $(\psi_2, B)$ is a FS subset of $(\psi_1, A)$.

    This section examines the optimistic roughness of an FS using two SBrs from $U$ to $V$. We then use the aftersets and foresets of the SBrs to approximate an FS of universe $V$ in universe $U$ and an FS of universe $U$ in universe $V$, respectively. As a result, for each FS in $V$ (resp. $U$) we obtain two FSSs.

    Definition 3.1. Let $\mu$ be a FS in $V$ and let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$. The optimistic lower approximation (OLAP) $\underline{\psi_1+\psi_2}^{\,o}\mu$ and optimistic upper approximation (OUAP) $\overline{\psi_1+\psi_2}^{\,o}\mu$ of the FS $\mu$ w.r.t. the aftersets of $\psi_1$ and $\psi_2$ are defined as:

    $\underline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \begin{cases} \bigwedge\{\mu(b) :\ b \in a\psi_1(e) \cup a\psi_2(e),\ b \in V\}, & \text{if } a\psi_1(e) \cup a\psi_2(e) \neq \emptyset \\ 0, & \text{otherwise,} \end{cases}$

    $\overline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \begin{cases} \bigvee\{\mu(b) :\ b \in a\psi_1(e) \cap a\psi_2(e),\ b \in V\}, & \text{if } a\psi_1(e) \cap a\psi_2(e) \neq \emptyset \\ 0, & \text{otherwise.} \end{cases}$

    where $a\psi_1(e) = \{b \in V : (a,b) \in \psi_1(e)\}$ and $a\psi_2(e) = \{b \in V : (a,b) \in \psi_2(e)\}$ are the aftersets of $a$ for $a \in U$ and $e \in A$. Obviously, $(\underline{\psi_1+\psi_2}^{\,o}\mu, A)$ and $(\overline{\psi_1+\psi_2}^{\,o}\mu, A)$ are two FSSs over $U$.

    Definition 3.2. Let $\gamma$ be a FS in $U$ and let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$. The optimistic lower approximation (OLAP) $\gamma\underline{\psi_1+\psi_2}^{\,o}$ and optimistic upper approximation (OUAP) $\gamma\overline{\psi_1+\psi_2}^{\,o}$ of the FS $\gamma$ w.r.t. the foresets of $\psi_1$ and $\psi_2$ are defined as:

    $\gamma\underline{\psi_1+\psi_2}^{\,o}(e)(b) = \begin{cases} \bigwedge\{\gamma(a) :\ a \in \psi_1(e)b \cup \psi_2(e)b,\ a \in U\}, & \text{if } \psi_1(e)b \cup \psi_2(e)b \neq \emptyset \\ 0, & \text{otherwise,} \end{cases}$

    $\gamma\overline{\psi_1+\psi_2}^{\,o}(e)(b) = \begin{cases} \bigvee\{\gamma(a) :\ a \in \psi_1(e)b \cap \psi_2(e)b,\ a \in U\}, & \text{if } \psi_1(e)b \cap \psi_2(e)b \neq \emptyset \\ 0, & \text{otherwise.} \end{cases}$

    where $\psi_1(e)b = \{a \in U : (a,b) \in \psi_1(e)\}$ and $\psi_2(e)b = \{a \in U : (a,b) \in \psi_2(e)\}$ are the foresets of $b$ for $b \in V$ and $e \in A$.

    Obviously, $(\gamma\underline{\psi_1+\psi_2}^{\,o}, A)$ and $(\gamma\overline{\psi_1+\psi_2}^{\,o}, A)$ are two FSSs over $V$.

    Moreover, $\underline{\psi_1+\psi_2}^{\,o}\mu: A \to F(U)$, $\overline{\psi_1+\psi_2}^{\,o}\mu: A \to F(U)$ and $\gamma\underline{\psi_1+\psi_2}^{\,o}: A \to F(V)$, $\gamma\overline{\psi_1+\psi_2}^{\,o}: A \to F(V)$, and we call $(U, V, \{\psi_1, \psi_2\})$ a generalized soft approximation space.
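    For finite data, Definitions 3.1 and 3.2 reduce to min/max scans over aftersets (and, symmetrically, foresets). The following Python sketch is ours, not from the paper; the relation, parameter, and element names are illustrative:

```python
# A SBr is stored as {parameter: set of (a, b) pairs}; a fuzzy set as {b: grade}.

def afterset(psi, e, a):
    return {b for (x, b) in psi[e] if x == a}

def olap(psi1, psi2, mu, e, a):
    """Lower value: infimum of mu over the UNION of the two aftersets (0 if empty)."""
    union = afterset(psi1, e, a) | afterset(psi2, e, a)
    return min((mu[b] for b in union), default=0)

def ouap(psi1, psi2, mu, e, a):
    """Upper value: supremum of mu over the INTERSECTION of the aftersets (0 if empty)."""
    inter = afterset(psi1, e, a) & afterset(psi2, e, a)
    return max((mu[b] for b in inter), default=0)

psi1 = {"e1": {("a1", "b2"), ("a1", "b3")}}
psi2 = {"e1": {("a1", "b2")}}
mu = {"b2": 0.8, "b3": 0.4}
print(olap(psi1, psi2, mu, "e1", "a1"))  # 0.4
print(ouap(psi1, psi2, mu, "e1", "a1"))  # 0.8
```

    The foreset versions of Definition 3.2 are obtained by matching pairs on the second coordinate instead of the first.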

    Example 3.1. There are fifteen excellent all-rounders who are eligible for the tournament, divided into the Platinum and Diamond categories. A franchise XYZ wants to choose one of these players as its finest all-rounder. The Platinum Group players are represented by the set $U = \{a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8\}$ and the Diamond Group players are represented by the set $V = \{b_1, b_2, b_3, b_4, b_5, b_6, b_7\}$. Suppose $A = \{e_1, e_2\}$ is the set of parameters, where $e_1$ stands for batsman and $e_2$ for bowler. Let two distinct coaching teams evaluate and compare these players based on how they performed in the various leagues they have played in throughout the world. From these comparisons, we have the following.

    The first team of coaches' comparison, $\psi_1: A \to P(U \times V)$, is represented by
    $\psi_1(e_1) = \{(a_1,b_2),(a_1,b_3),(a_2,b_2),(a_2,b_5),(a_3,b_4),(a_3,b_5),(a_4,b_1),(a_4,b_3),(a_5,b_1),(a_5,b_6),(a_7,b_4),(a_7,b_7)\},$
    $\psi_1(e_2) = \{(a_1,b_3),(a_1,b_6),(a_2,b_1),(a_2,b_4),(a_3,b_1),(a_4,b_5),(a_4,b_7),(a_5,b_2),(a_5,b_7),(a_7,b_3),(a_7,b_6),(a_8,b_1),(a_8,b_7)\},$

    where $\psi_1(e_1)$ compares the batting performance of the players and $\psi_1(e_2)$ compares their bowling performance.

    The second team of coaches' comparison, $\psi_2: A \to P(U \times V)$, is represented by
    $\psi_2(e_1) = \{(a_1,b_2),(a_2,b_3),(a_2,b_5),(a_3,b_4),(a_4,b_3),(a_4,b_5),(a_4,b_6),(a_5,b_4),(a_6,b_7),(a_7,b_3),(a_7,b_7),(a_8,b_2),(a_8,b_5)\},$
    $\psi_2(e_2) = \{(a_1,b_3),(a_1,b_4),(a_2,b_3),(a_2,b_4),(a_2,b_7),(a_3,b_1),(a_3,b_6),(a_4,b_2),(a_4,b_4),(a_5,b_2),(a_6,b_5),(a_7,b_6),(a_8,b_1),(a_8,b_3)\},$

    where $\psi_2(e_1)$ compares the batting performance of the players and $\psi_2(e_2)$ compares their bowling performance.

    We obtain two SBrs from U to V from these comparisons. Now the aftersets are

    $a_1\psi_1(e_1)=\{b_2,b_3\}$, $a_1\psi_1(e_2)=\{b_3,b_6\}$, $a_1\psi_2(e_1)=\{b_2\}$, $a_1\psi_2(e_2)=\{b_3,b_4\}$
    $a_2\psi_1(e_1)=\{b_2,b_5\}$, $a_2\psi_1(e_2)=\{b_1,b_4\}$, $a_2\psi_2(e_1)=\{b_3,b_5\}$, $a_2\psi_2(e_2)=\{b_3,b_4,b_7\}$
    $a_3\psi_1(e_1)=\{b_4,b_5\}$, $a_3\psi_1(e_2)=\{b_1\}$, $a_3\psi_2(e_1)=\{b_4\}$, $a_3\psi_2(e_2)=\{b_1,b_6\}$
    $a_4\psi_1(e_1)=\{b_1,b_3\}$, $a_4\psi_1(e_2)=\{b_5,b_7\}$, $a_4\psi_2(e_1)=\{b_3,b_5,b_6\}$, $a_4\psi_2(e_2)=\{b_2,b_4\}$
    $a_5\psi_1(e_1)=\{b_1,b_6\}$, $a_5\psi_1(e_2)=\{b_2,b_7\}$, $a_5\psi_2(e_1)=\{b_4\}$, $a_5\psi_2(e_2)=\{b_2\}$
    $a_6\psi_1(e_1)=\emptyset$, $a_6\psi_1(e_2)=\emptyset$, $a_6\psi_2(e_1)=\{b_7\}$, $a_6\psi_2(e_2)=\{b_5\}$
    $a_7\psi_1(e_1)=\{b_4,b_7\}$, $a_7\psi_1(e_2)=\{b_3,b_6\}$, $a_7\psi_2(e_1)=\{b_3,b_7\}$, $a_7\psi_2(e_2)=\{b_6\}$
    $a_8\psi_1(e_1)=\emptyset$, $a_8\psi_1(e_2)=\{b_1,b_7\}$, $a_8\psi_2(e_1)=\{b_2,b_5\}$, $a_8\psi_2(e_2)=\{b_1\}$.

    All the players in the Diamond Group whose batting performance is similar to $a_i$ are represented by $a_i\psi_j(e_1)$, and all the players in the Diamond Group whose bowling performance is similar to $a_i$ are represented by $a_i\psi_j(e_2)$. The foresets are

    $\psi_1(e_1)b_1=\{a_4,a_5\}$, $\psi_1(e_2)b_1=\{a_2,a_3,a_8\}$, $\psi_2(e_1)b_1=\emptyset$, $\psi_2(e_2)b_1=\{a_3,a_8\}$
    $\psi_1(e_1)b_2=\{a_1,a_2\}$, $\psi_1(e_2)b_2=\{a_5\}$, $\psi_2(e_1)b_2=\{a_8\}$, $\psi_2(e_2)b_2=\{a_4,a_5\}$
    $\psi_1(e_1)b_3=\{a_1,a_4\}$, $\psi_1(e_2)b_3=\{a_7\}$, $\psi_2(e_1)b_3=\{a_2,a_4,a_7\}$, $\psi_2(e_2)b_3=\{a_1,a_2\}$
    $\psi_1(e_1)b_4=\{a_7\}$, $\psi_1(e_2)b_4=\{a_2\}$, $\psi_2(e_1)b_4=\{a_3,a_5\}$, $\psi_2(e_2)b_4=\{a_1,a_4\}$
    $\psi_1(e_1)b_5=\{a_2,a_3\}$, $\psi_1(e_2)b_5=\{a_4\}$, $\psi_2(e_1)b_5=\{a_2,a_4,a_8\}$, $\psi_2(e_2)b_5=\{a_6\}$
    $\psi_1(e_1)b_6=\{a_5\}$, $\psi_1(e_2)b_6=\{a_1,a_7\}$, $\psi_2(e_1)b_6=\{a_4\}$, $\psi_2(e_2)b_6=\{a_3,a_7\}$
    $\psi_1(e_1)b_7=\{a_7\}$, $\psi_1(e_2)b_7=\{a_4,a_5,a_8\}$, $\psi_2(e_1)b_7=\{a_6,a_7\}$, $\psi_2(e_2)b_7=\{a_2\}$.

    All the players in the platinum group whose batting performance is similar to bi are represented by ψj(e1)bi, and all the players in the platinum group whose bowling performance is similar to bi are represented by ψj(e2)bi.
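    Computationally, aftersets and foresets are just the row and column sections of the relation. A small sketch of ours, using part of $\psi_1(e_1)$ from this example (helper names are illustrative):

```python
# Aftersets are row sections, foresets are column sections of the relation.
pairs = {("a1", "b2"), ("a1", "b3"), ("a2", "b2"), ("a2", "b5")}

def afterset(a):
    return {b for (x, b) in pairs if x == a}

def foreset(b):
    return {a for (a, y) in pairs if y == b}

print(sorted(afterset("a1")))  # ['b2', 'b3']
print(sorted(foreset("b2")))   # ['a1', 'a2']
```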

    Define $\mu: V \to [0,1]$, which represents the preference for the players given by the franchise XYZ, such that

    μ(b1)=0.9,μ(b2)=0.8,μ(b3)=0.4,μ(b4)=0,μ(b5)=0.3,μ(b6)=0.1,μ(b7)=1 and

    define $\gamma: U \to [0,1]$, which represents the preference for the players given by the franchise XYZ, such that

    γ(a1)=0.2,γ(a2)=1,γ(a3)=0.5, γ(a4)=0.9,γ(a5)=0.6,γ(a6)=0.7, γ(a7)=0.1,γ(a8)=0.3.

    Therefore the optimistic lower and upper approximations of μ (with respect to the aftersets of ψ1 and ψ2) are:

      $a_1$ $a_2$ $a_3$ $a_4$ $a_5$ $a_6$ $a_7$ $a_8$
    $\underline{\psi_1+\psi_2}^{\,o}\mu(e_1)$: 0.4 0.3 0 0.1 0 1 0 0.3
    $\overline{\psi_1+\psi_2}^{\,o}\mu(e_1)$: 0.8 0.3 0 0.4 0 0 1 0
    $\underline{\psi_1+\psi_2}^{\,o}\mu(e_2)$: 0 0 0.1 0 0.8 0.3 0.1 0.9
    $\overline{\psi_1+\psi_2}^{\,o}\mu(e_2)$: 0.4 0 0.9 0 0.8 0 0.1 0.9

    Hence, $\underline{\psi_1+\psi_2}^{\,o}\mu(e_i)(a_i)$ provides the exact degree of the performance of the player $a_i$ in $\mu$ as a batsman and bowler, and $\overline{\psi_1+\psi_2}^{\,o}\mu(e_i)(a_i)$ provides the possible degree of the performance of the player $a_i$ in $\mu$ as a batsman and bowler, w.r.t. the aftersets. The OLAP and OUAP of $\gamma$ (w.r.t. the foresets of $\psi_1$ and $\psi_2$) are:

      $b_1$ $b_2$ $b_3$ $b_4$ $b_5$ $b_6$ $b_7$
    $\gamma\underline{\psi_1+\psi_2}^{\,o}(e_1)$: 0.6 0.2 0.1 0.1 0.3 0.6 0.1
    $\gamma\overline{\psi_1+\psi_2}^{\,o}(e_1)$: 0 0 0.9 0 1 0 0.1
    $\gamma\underline{\psi_1+\psi_2}^{\,o}(e_2)$: 0.3 0.6 0.1 0.2 0.7 0.1 0.3
    $\gamma\overline{\psi_1+\psi_2}^{\,o}(e_2)$: 0.5 0.6 0 0 0 0.1 0

    Hence, $\gamma\underline{\psi_1+\psi_2}^{\,o}(e_i)(b_i)$ provides the exact degree of the performance of the player $b_i$ in $\gamma$ as a batsman and bowler, and $\gamma\overline{\psi_1+\psi_2}^{\,o}(e_i)(b_i)$ provides the possible degree of the performance of the player $b_i$ in $\gamma$ as a batsman and bowler, w.r.t. the foresets.
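    As a rough check (our own sketch, not from the paper), a few entries of the tables above can be recomputed directly from the aftersets of Example 3.1, taking the minimum of $\mu$ over the union of the two aftersets and the maximum over their intersection:

```python
# Recomputing a few table entries from the aftersets of Example 3.1.
mu = {"b1": 0.9, "b2": 0.8, "b3": 0.4, "b4": 0.0, "b5": 0.3, "b6": 0.1, "b7": 1.0}

def approx(s1, s2):
    """(lower, upper) = (min of mu over the union, max of mu over the intersection)."""
    lower = min((mu[b] for b in s1 | s2), default=0)
    upper = max((mu[b] for b in s1 & s2), default=0)
    return lower, upper

print(approx({"b1", "b3"}, {"b3", "b5", "b6"}))  # (0.1, 0.4)  -> a4 at e1
print(approx({"b4", "b7"}, {"b3", "b7"}))        # (0.0, 1.0)  -> a7 at e1
print(approx(set(), {"b2", "b5"}))               # (0.3, 0)    -> a8 at e1
```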

    The following result demonstrates a relationship between our proposed OMGFRS model and the PMGFRS model proposed by Din et al. [10], and reflects that the proposed model is entirely different from Din et al.'s [10] approach.

    Proposition 3.1. Let $\psi_1$ and $\psi_2$ be two SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\mu \in F(V)$. Then the following hold w.r.t. the aftersets.

    (1) $\underline{\psi_1+\psi_2}^{\,o}\mu \subseteq \underline{\psi_1+\psi_2}^{\,p}\mu$

    (2) $\overline{\psi_1+\psi_2}^{\,o}\mu \subseteq \overline{\psi_1+\psi_2}^{\,p}\mu$

    (3) $\underline{\psi_1+\psi_2}^{\,o}\mu^c = (\overline{\psi_1+\psi_2}^{\,p}\mu)^c$

    (4) $\overline{\psi_1+\psi_2}^{\,o}\mu^c = (\underline{\psi_1+\psi_2}^{\,p}\mu)^c$.

    Proof. (1) Consider $\underline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \bigwedge\{\mu(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} \le \bigwedge\{\mu(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = \underline{\psi_1+\psi_2}^{\,p}\mu(e)(a)$. Hence $\underline{\psi_1+\psi_2}^{\,o}\mu \subseteq \underline{\psi_1+\psi_2}^{\,p}\mu$.

    (2) Consider $\overline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \bigvee\{\mu(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} \le \bigvee\{\mu(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = \overline{\psi_1+\psi_2}^{\,p}\mu(e)(a)$. Hence $\overline{\psi_1+\psi_2}^{\,o}\mu \subseteq \overline{\psi_1+\psi_2}^{\,p}\mu$.

    (3) Consider $\underline{\psi_1+\psi_2}^{\,o}\mu^c(e)(a) = \bigwedge\{\mu^c(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = \bigwedge\{(1-\mu)(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = 1 - \bigvee\{\mu(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = (\overline{\psi_1+\psi_2}^{\,p}\mu(e)(a))^c$. Hence $\underline{\psi_1+\psi_2}^{\,o}\mu^c = (\overline{\psi_1+\psi_2}^{\,p}\mu)^c$.

    (4) Consider $\overline{\psi_1+\psi_2}^{\,o}\mu^c(e)(a) = \bigvee\{\mu^c(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = \bigvee\{(1-\mu)(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = 1 - \bigwedge\{\mu(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = (\underline{\psi_1+\psi_2}^{\,p}\mu(e)(a))^c$. Hence $\overline{\psi_1+\psi_2}^{\,o}\mu^c = (\underline{\psi_1+\psi_2}^{\,p}\mu)^c$.

    Proposition 3.2. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\gamma \in F(U)$. Then the following hold w.r.t. the foresets.

    (1) $\gamma\underline{\psi_1+\psi_2}^{\,o} \subseteq \gamma\underline{\psi_1+\psi_2}^{\,p}$

    (2) $\gamma\overline{\psi_1+\psi_2}^{\,o} \subseteq \gamma\overline{\psi_1+\psi_2}^{\,p}$

    (3) $\gamma^c\underline{\psi_1+\psi_2}^{\,o} = (\gamma\overline{\psi_1+\psi_2}^{\,p})^c$

    (4) $\gamma^c\overline{\psi_1+\psi_2}^{\,o} = (\gamma\underline{\psi_1+\psi_2}^{\,p})^c$.

    Proof. The proof is identical to that of Proposition 3.1.

    Proposition 3.3. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\mu \in F(V)$. Then the following hold w.r.t. the aftersets.

    (1) $\underline{\psi_1+\psi_2}^{\,o}\mu \subseteq \underline{\psi_1}\mu \vee \underline{\psi_2}\mu$

    (2) $\overline{\psi_1+\psi_2}^{\,o}\mu \subseteq \overline{\psi_1}\mu \wedge \overline{\psi_2}\mu$

    Proof. (1) Consider $\underline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \bigwedge\{\mu(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} \le (\bigwedge\{\mu(b) : b \in a\psi_1(e)\}) \vee (\bigwedge\{\mu(b) : b \in a\psi_2(e)\}) = \underline{\psi_1}\mu(e)(a) \vee \underline{\psi_2}\mu(e)(a)$. Hence $\underline{\psi_1+\psi_2}^{\,o}\mu \subseteq \underline{\psi_1}\mu \vee \underline{\psi_2}\mu$.

    (2) Consider $\overline{\psi_1+\psi_2}^{\,o}\mu(e)(a) = \bigvee\{\mu(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} \le (\bigvee\{\mu(b) : b \in a\psi_1(e)\}) \wedge (\bigvee\{\mu(b) : b \in a\psi_2(e)\}) = \overline{\psi_1}\mu(e)(a) \wedge \overline{\psi_2}\mu(e)(a)$. Hence $\overline{\psi_1+\psi_2}^{\,o}\mu \subseteq \overline{\psi_1}\mu \wedge \overline{\psi_2}\mu$.

    Proposition 3.4. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\gamma \in F(U)$. Then the following hold w.r.t. the foresets.

    (1) $\gamma\underline{\psi_1+\psi_2}^{\,o} \subseteq \gamma\underline{\psi_1} \vee \gamma\underline{\psi_2}$

    (2) $\gamma\overline{\psi_1+\psi_2}^{\,o} \subseteq \gamma\overline{\psi_1} \wedge \gamma\overline{\psi_2}$.

    Proof. The proof is identical to that of Proposition 3.3.

    The following example shows that the converse inclusions do not hold in general.

    Example 3.2. (Example 3.1 continued.) From Example 3.1, we have the following outcomes.

    $\underline{\psi_1}\mu(e_1)(a_5) = 0.1$, $\underline{\psi_2}\mu(e_1)(a_5) = 0$, $\overline{\psi_1}\mu(e_1)(a_2) = 0.8$, $\overline{\psi_2}\mu(e_1)(a_2) = 0.4$.

    Hence,

    $\underline{\psi_1+\psi_2}^{\,o}\mu(e_1)(a_5) = 0 \neq 0.1 = \underline{\psi_1}\mu(e_1)(a_5) \vee \underline{\psi_2}\mu(e_1)(a_5)$ and $\overline{\psi_1+\psi_2}^{\,o}\mu(e_1)(a_2) = 0.3 \neq 0.4 = \overline{\psi_1}\mu(e_1)(a_2) \wedge \overline{\psi_2}\mu(e_1)(a_2)$.

    And

    $\gamma\underline{\psi_1}(e_1)(b_2) = 0.2$, $\gamma\underline{\psi_2}(e_1)(b_2) = 0.3$, $\gamma\overline{\psi_1}(e_1)(b_2) = 1$, $\gamma\overline{\psi_2}(e_1)(b_2) = 0.3$.

    Hence,

    $\gamma\underline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) = 0.2 \neq 0.3 = \gamma\underline{\psi_1}(e_1)(b_2) \vee \gamma\underline{\psi_2}(e_1)(b_2)$ and $\gamma\overline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) = 0 \neq 0.3 = \gamma\overline{\psi_1}(e_1)(b_2) \wedge \gamma\overline{\psi_2}(e_1)(b_2)$.

    Proposition 3.5. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$. Then the following hold.

    (1) $\underline{\psi_1+\psi_2}^{\,o}1 = 1$ for all $e \in A$ if $a\psi_1(e) \neq \emptyset$ or $a\psi_2(e) \neq \emptyset$

    (2) $\overline{\psi_1+\psi_2}^{\,o}1 = 1$ for all $e \in A$ if $a\psi_1(e) \cap a\psi_2(e) \neq \emptyset$

    (3) $\underline{\psi_1+\psi_2}^{\,o}0 = 0 = \overline{\psi_1+\psi_2}^{\,o}0$.

    Proof. (1) Consider $\underline{\psi_1+\psi_2}^{\,o}1(e)(a) = \bigwedge\{1(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = \bigwedge\{1 : b \in a\psi_1(e) \cup a\psi_2(e)\} = 1$, because $a\psi_1(e) \neq \emptyset$ or $a\psi_2(e) \neq \emptyset$.

    (2) Consider $\overline{\psi_1+\psi_2}^{\,o}1(e)(a) = \bigvee\{1(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = \bigvee\{1 : b \in a\psi_1(e) \cap a\psi_2(e)\} = 1$, because $a\psi_1(e) \cap a\psi_2(e) \neq \emptyset$.

    (3) Straightforward.

    Proposition 3.6. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$. Then the following hold.

    (1) $1\underline{\psi_1+\psi_2}^{\,o} = 1$ for all $e \in A$ if $\psi_1(e)b \neq \emptyset$ or $\psi_2(e)b \neq \emptyset$

    (2) $1\overline{\psi_1+\psi_2}^{\,o} = 1$ for all $e \in A$ if $\psi_1(e)b \cap \psi_2(e)b \neq \emptyset$

    (3) $0\underline{\psi_1+\psi_2}^{\,o} = 0 = 0\overline{\psi_1+\psi_2}^{\,o}$.

    Proof. The proof is identical to that of Proposition 3.5.

    Proposition 3.7. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\mu, \mu_1, \mu_2 \in F(V)$. Then the following properties of $\underline{\psi_1+\psi_2}^{\,o}\mu$ and $\overline{\psi_1+\psi_2}^{\,o}\mu$ hold w.r.t. the aftersets.

    (1) If $\mu_1 \subseteq \mu_2$ then $\underline{\psi_1+\psi_2}^{\,o}\mu_1 \subseteq \underline{\psi_1+\psi_2}^{\,o}\mu_2$

    (2) If $\mu_1 \subseteq \mu_2$ then $\overline{\psi_1+\psi_2}^{\,o}\mu_1 \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_2$

    (3) $\underline{\psi_1+\psi_2}^{\,o}(\mu_1 \cap \mu_2) = \underline{\psi_1+\psi_2}^{\,o}\mu_1 \cap \underline{\psi_1+\psi_2}^{\,o}\mu_2$

    (4) $\underline{\psi_1+\psi_2}^{\,o}(\mu_1 \cup \mu_2) \supseteq \underline{\psi_1+\psi_2}^{\,o}\mu_1 \cup \underline{\psi_1+\psi_2}^{\,o}\mu_2$

    (5) $\overline{\psi_1+\psi_2}^{\,o}(\mu_1 \cup \mu_2) = \overline{\psi_1+\psi_2}^{\,o}\mu_1 \cup \overline{\psi_1+\psi_2}^{\,o}\mu_2$

    (6) $\overline{\psi_1+\psi_2}^{\,o}(\mu_1 \cap \mu_2) \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_1 \cap \overline{\psi_1+\psi_2}^{\,o}\mu_2$.

    Proof. (1) Since $\mu_1 \subseteq \mu_2$, $\underline{\psi_1+\psi_2}^{\,o}\mu_1(e)(a) = \bigwedge\{\mu_1(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} \le \bigwedge\{\mu_2(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = \underline{\psi_1+\psi_2}^{\,o}\mu_2(e)(a)$. Hence $\underline{\psi_1+\psi_2}^{\,o}\mu_1 \subseteq \underline{\psi_1+\psi_2}^{\,o}\mu_2$.

    (2) Since $\mu_1 \subseteq \mu_2$, $\overline{\psi_1+\psi_2}^{\,o}\mu_1(e)(a) = \bigvee\{\mu_1(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} \le \bigvee\{\mu_2(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = \overline{\psi_1+\psi_2}^{\,o}\mu_2(e)(a)$. Hence $\overline{\psi_1+\psi_2}^{\,o}\mu_1 \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_2$.

    (3) Consider $\underline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2)(e)(a) = \bigwedge\{(\mu_1\cap\mu_2)(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = \bigwedge\{\mu_1(b)\wedge\mu_2(b) : b \in a\psi_1(e) \cup a\psi_2(e)\} = (\bigwedge\{\mu_1(b) : b \in a\psi_1(e) \cup a\psi_2(e)\}) \wedge (\bigwedge\{\mu_2(b) : b \in a\psi_1(e) \cup a\psi_2(e)\}) = \underline{\psi_1+\psi_2}^{\,o}\mu_1(e)(a) \wedge \underline{\psi_1+\psi_2}^{\,o}\mu_2(e)(a)$. Hence $\underline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2) = \underline{\psi_1+\psi_2}^{\,o}\mu_1 \cap \underline{\psi_1+\psi_2}^{\,o}\mu_2$.

    (4) Since $\mu_1 \subseteq \mu_1\cup\mu_2$ and $\mu_2 \subseteq \mu_1\cup\mu_2$, part (1) gives $\underline{\psi_1+\psi_2}^{\,o}\mu_1 \subseteq \underline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)$ and $\underline{\psi_1+\psi_2}^{\,o}\mu_2 \subseteq \underline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)$; hence $\underline{\psi_1+\psi_2}^{\,o}\mu_1 \cup \underline{\psi_1+\psi_2}^{\,o}\mu_2 \subseteq \underline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)$.

    (5) Consider $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)(e)(a) = \bigvee\{(\mu_1\cup\mu_2)(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = \bigvee\{\mu_1(b)\vee\mu_2(b) : b \in a\psi_1(e) \cap a\psi_2(e)\} = (\bigvee\{\mu_1(b) : b \in a\psi_1(e) \cap a\psi_2(e)\}) \vee (\bigvee\{\mu_2(b) : b \in a\psi_1(e) \cap a\psi_2(e)\}) = \overline{\psi_1+\psi_2}^{\,o}\mu_1(e)(a) \vee \overline{\psi_1+\psi_2}^{\,o}\mu_2(e)(a)$. Hence $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2) = \overline{\psi_1+\psi_2}^{\,o}\mu_1 \cup \overline{\psi_1+\psi_2}^{\,o}\mu_2$.

    (6) Since $\mu_1\cap\mu_2 \subseteq \mu_1$ and $\mu_1\cap\mu_2 \subseteq \mu_2$, part (2) gives $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2) \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_1$ and $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2) \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_2$; hence $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2) \subseteq \overline{\psi_1+\psi_2}^{\,o}\mu_1 \cap \overline{\psi_1+\psi_2}^{\,o}\mu_2$.

    Proposition 3.8. Let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$, that is, $\psi_1: A \to P(U \times V)$ and $\psi_2: A \to P(U \times V)$, and let $\gamma, \gamma_1, \gamma_2 \in F(U)$. Then the following properties of $\gamma\underline{\psi_1+\psi_2}^{\,o}$ and $\gamma\overline{\psi_1+\psi_2}^{\,o}$ hold w.r.t. the foresets.

    (1) If $\gamma_1 \subseteq \gamma_2$ then $\gamma_1\underline{\psi_1+\psi_2}^{\,o} \subseteq \gamma_2\underline{\psi_1+\psi_2}^{\,o}$

    (2) If $\gamma_1 \subseteq \gamma_2$ then $\gamma_1\overline{\psi_1+\psi_2}^{\,o} \subseteq \gamma_2\overline{\psi_1+\psi_2}^{\,o}$

    (3) $(\gamma_1\cap\gamma_2)\underline{\psi_1+\psi_2}^{\,o} = \gamma_1\underline{\psi_1+\psi_2}^{\,o} \cap \gamma_2\underline{\psi_1+\psi_2}^{\,o}$

    (4) $(\gamma_1\cup\gamma_2)\underline{\psi_1+\psi_2}^{\,o} \supseteq \gamma_1\underline{\psi_1+\psi_2}^{\,o} \cup \gamma_2\underline{\psi_1+\psi_2}^{\,o}$

    (5) $(\gamma_1\cup\gamma_2)\overline{\psi_1+\psi_2}^{\,o} = \gamma_1\overline{\psi_1+\psi_2}^{\,o} \cup \gamma_2\overline{\psi_1+\psi_2}^{\,o}$

    (6) $(\gamma_1\cap\gamma_2)\overline{\psi_1+\psi_2}^{\,o} \subseteq \gamma_1\overline{\psi_1+\psi_2}^{\,o} \cap \gamma_2\overline{\psi_1+\psi_2}^{\,o}$.

    Proof. The proof is identical to that of Proposition 3.7.

    The following example shows that, in general, equality does not hold in parts (4) and (6) of Propositions 3.7 and 3.8.

    Example 3.3. Suppose U={a1,a2,a3,a4} and V={b1,b2,b3,b4} are universes, ψ1 and ψ2 are SBrs over U×V, with the following aftersets:

    $a_1\psi_1(e_1)=\{b_1,b_2,b_4\}$, $a_1\psi_1(e_2)=\{b_2\}$, $a_1\psi_2(e_1)=\{b_2,b_3,b_4\}$, $a_1\psi_2(e_2)=\{b_1\}$
    $a_2\psi_1(e_1)=\{b_2\}$, $a_2\psi_1(e_2)=\{b_4\}$, $a_2\psi_2(e_1)=\{b_2\}$, $a_2\psi_2(e_2)=\{b_2,b_4\}$
    $a_3\psi_1(e_1)=\{b_3,b_4\}$, $a_3\psi_1(e_2)=\{b_1\}$, $a_3\psi_2(e_1)=\{b_4\}$, $a_3\psi_2(e_2)=\{b_2,b_4\}$
    $a_4\psi_1(e_1)=\emptyset$, $a_4\psi_1(e_2)=\{b_2\}$, $a_4\psi_2(e_1)=\{b_2,b_3\}$, $a_4\psi_2(e_2)=\{b_1,b_2\}$.

    And foresets are:

    $\psi_1(e_1)b_1=\{a_1\}$, $\psi_1(e_2)b_1=\{a_3\}$, $\psi_2(e_1)b_1=\emptyset$, $\psi_2(e_2)b_1=\{a_1,a_4\}$
    $\psi_1(e_1)b_2=\{a_1,a_2\}$, $\psi_1(e_2)b_2=\{a_1,a_4\}$, $\psi_2(e_1)b_2=\{a_1,a_2,a_4\}$, $\psi_2(e_2)b_2=\{a_2,a_3,a_4\}$
    $\psi_1(e_1)b_3=\{a_3\}$, $\psi_1(e_2)b_3=\emptyset$, $\psi_2(e_1)b_3=\{a_1,a_4\}$, $\psi_2(e_2)b_3=\emptyset$
    $\psi_1(e_1)b_4=\{a_1,a_3\}$, $\psi_1(e_2)b_4=\{a_2\}$, $\psi_2(e_1)b_4=\{a_1,a_4\}$, $\psi_2(e_2)b_4=\{a_2,a_3\}$.

    Let $\mu_1, \mu_2, \mu_1\cup\mu_2, \mu_1\cap\mu_2 \in F(V)$ be defined as follows:

      $b_4$ $b_3$ $b_2$ $b_1$
    $\mu_1$: 0 0.3 0.7 0.2
    $\mu_2$: 0.6 0 0.5 0.3
    $\mu_1\cup\mu_2$: 0.6 0.3 0.7 0.3
    $\mu_1\cap\mu_2$: 0 0 0.5 0.2

    And let $\gamma_1, \gamma_2, \gamma_1\cup\gamma_2, \gamma_1\cap\gamma_2 \in F(U)$ be defined as follows:

      $a_4$ $a_3$ $a_2$ $a_1$
    $\gamma_1$: 0.5 0.3 0.2 0.1
    $\gamma_2$: 0 0.3 0 0.5
    $\gamma_1\cup\gamma_2$: 0.5 0.3 0.2 0.5
    $\gamma_1\cap\gamma_2$: 0 0.3 0 0.1

    Then,

      $a_1$ $a_2$ $a_3$ $a_4$
    $\underline{\psi_1+\psi_2}^{\,o}\mu_1(e_1)$: 0 0.7 0 0.3
    $\overline{\psi_1+\psi_2}^{\,o}\mu_1(e_1)$: 0.7 0.7 0 0
    $\underline{\psi_1+\psi_2}^{\,o}\mu_2(e_1)$: 0 0.5 0 0
    $\overline{\psi_1+\psi_2}^{\,o}\mu_2(e_1)$: 0.6 0.5 0.6 0
    $\underline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)(e_1)$: 0.3 0.7 0.3 0.3
    $\overline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2)(e_1)$: 0.5 0.5 0 0

    And

      $b_1$ $b_2$ $b_3$ $b_4$
    $\gamma_1\underline{\psi_1+\psi_2}^{\,o}(e_1)$: 0.1 0.1 0.1 0.1
    $\gamma_1\overline{\psi_1+\psi_2}^{\,o}(e_1)$: 0 0.2 0 0.1
    $\gamma_2\underline{\psi_1+\psi_2}^{\,o}(e_1)$: 0.5 0 0 0
    $\gamma_2\overline{\psi_1+\psi_2}^{\,o}(e_1)$: 0 0.5 0 0.5
    $(\gamma_1\cup\gamma_2)\underline{\psi_1+\psi_2}^{\,o}(e_1)$: 0.5 0.2 0.3 0.3
    $(\gamma_1\cap\gamma_2)\overline{\psi_1+\psi_2}^{\,o}(e_1)$: 0 0.1 0 0.1

    Hence,

    $\underline{\psi_1+\psi_2}^{\,o}\mu_1(e_1)(a_1) \vee \underline{\psi_1+\psi_2}^{\,o}\mu_2(e_1)(a_1) = 0 \neq 0.3 = \underline{\psi_1+\psi_2}^{\,o}(\mu_1\cup\mu_2)(e_1)(a_1)$, and $\overline{\psi_1+\psi_2}^{\,o}\mu_1(e_1)(a_1) \wedge \overline{\psi_1+\psi_2}^{\,o}\mu_2(e_1)(a_1) = 0.7 \wedge 0.6 = 0.6 \neq 0.5 = \overline{\psi_1+\psi_2}^{\,o}(\mu_1\cap\mu_2)(e_1)(a_1)$.

    And

    $\gamma_1\underline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) \vee \gamma_2\underline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) = 0.1 \vee 0 = 0.1 \neq 0.2 = (\gamma_1\cup\gamma_2)\underline{\psi_1+\psi_2}^{\,o}(e_1)(b_2)$, and $\gamma_1\overline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) \wedge \gamma_2\overline{\psi_1+\psi_2}^{\,o}(e_1)(b_2) = 0.2 \wedge 0.5 = 0.2 \neq 0.1 = (\gamma_1\cap\gamma_2)\overline{\psi_1+\psi_2}^{\,o}(e_1)(b_2)$.

    Definitions 3.1 and 3.2 represent the approximations as pairs of FSSs. By associating the $\alpha$-cut of an FS, we can describe the level sets (or $\alpha$-cuts) $(\underline{\psi_1+\psi_2}^{\,o}\mu(e))_\alpha$ of the lower approximation and $(\overline{\psi_1+\psi_2}^{\,o}\mu(e))_\alpha$ of the upper approximation, as defined below.

    Definition 3.3. Let $U$ and $V$ be two non-empty universal sets, $\mu \in F(V)$, and let $\psi_1$ and $\psi_2$ be SBrs over $U \times V$. For any $0 < \alpha \le 1$, the level sets of $\underline{\psi_1+\psi_2}^{\,o}\mu$ and $\overline{\psi_1+\psi_2}^{\,o}\mu$ are defined, respectively, as follows:

\begin{align*} (\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e))_{\alpha} & = \{a\in \mathfrak{U}: \underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e)(a)\geq \alpha\}\\ (^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e))_{\alpha} & = \{a\in \mathfrak{U}:\ ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e)(a)\geq \alpha\}. \end{align*}

Definition 3.4. Let \mathfrak{U} and \mathfrak{V} be two non-empty universal sets, and \gamma\in F(\mathfrak{U}). Let \psi_{1} and \psi_{2} be SBrs over \mathfrak{U}\times \mathfrak{V}. For any 1\geq \alpha > 0, the level sets for ^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o} and ^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o} of \gamma are defined, respectively, as follows:

\begin{align*} (^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e))_{\alpha} & = \{b\in \mathfrak{V}:\ ^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e)(b)\geq \alpha\}\\ (^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o}(e))_{\alpha} & = \{b\in \mathfrak{V}:\ ^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o}(e)(b)\geq \alpha\}. \end{align*}

Proposition 3.9. Let \psi_{1} and \psi_{2} be SBrs over \mathfrak{U}\times \mathfrak{V}, \mu\in F(\mathfrak{V}) and 1\geq \alpha > 0. Then, the following properties hold w.r.t aftersets:

(1) \underline{\psi_{1}+\psi_{2}}_{o}^{\mu_{\alpha}}(e) = (\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e))_{\alpha}

(2) ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu_{\alpha}}(e) = (^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e))_{\alpha}.

Proof. (1) Let \mu\in F(\mathfrak{V}) and 1\geq \alpha > 0. For the crisp set \mu_{\alpha}, we have

\begin{align*} \underline{\psi_{1}+\psi_{2}}_{o}^{\mu_{\alpha}}(e) & = \{a\in \mathfrak{U}: a\psi_{1}(e)\cup a\psi_{2}(e)\subseteq \mu_{\alpha}\}\\ & = \{a\in \mathfrak{U}: \mu(b)\geq \alpha\ \forall\ b\in a\psi_{1}(e)\cup a\psi_{2}(e), b\in \mathfrak{V}\}\\ & = \{a\in \mathfrak{U}: \wedge\{\mu(b): b\in a\psi_{1}(e)\cup a\psi_{2}(e), b\in \mathfrak{V}\}\geq \alpha\}\\ & = (\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e))_{\alpha}. \end{align*}

(2) Let \mu\in F(\mathfrak{V}) and 1\geq \alpha > 0. For the crisp set \mu_{\alpha}, we have

\begin{align*} ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu_{\alpha}}(e) & = \{a\in \mathfrak{U}: (a\psi_{1}(e)\cap a\psi_{2}(e))\cap \mu_{\alpha}\neq \emptyset\}\\ & = \{a\in \mathfrak{U}: \mu(b)\geq \alpha\ \text{for some}\ b\in a\psi_{1}(e)\cap a\psi_{2}(e), b\in \mathfrak{V}\}\\ & = \{a\in \mathfrak{U}: \vee\{\mu(b): b\in a\psi_{1}(e)\cap a\psi_{2}(e), b\in \mathfrak{V}\}\geq \alpha\}\\ & = (^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e))_{\alpha}. \end{align*}
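Proposition 3.9 can also be illustrated numerically. The sketch below (a hypothetical check of ours, reusing the data and the union-based lower-approximation convention of the running example) verifies that the \alpha-cut of the fuzzy lower approximation coincides with the lower approximation of the crisp cut \mu_{\alpha}.

```python
# Data from the running example (aftersets at parameter e1, fuzzy set mu1).
psi1_e1 = {"a1": {"b1", "b2", "b4"}, "a2": {"b2"}, "a3": {"b3", "b4"}, "a4": set()}
psi2_e1 = {"a1": {"b2", "b3", "b4"}, "a2": {"b2"}, "a3": {"b4"}, "a4": {"b2", "b3"}}
mu1 = {"b1": 0.2, "b2": 0.7, "b3": 0.3, "b4": 0.0}
alpha = 0.3

# Alpha-cut of the fuzzy optimistic lower approximation.
lower = {a: min((mu1[b] for b in psi1_e1[a] | psi2_e1[a]), default=0.0)
         for a in psi1_e1}
cut_of_lower = {a for a, v in lower.items() if v >= alpha}

# Optimistic lower approximation of the crisp alpha-cut mu_alpha:
# a qualifies iff the union of its aftersets is non-empty and lies inside mu_alpha.
mu_alpha = {b for b, v in mu1.items() if v >= alpha}
lower_of_cut = {a for a in psi1_e1
                if (psi1_e1[a] | psi2_e1[a]) and (psi1_e1[a] | psi2_e1[a]) <= mu_alpha}

print(cut_of_lower == lower_of_cut)  # True: the two constructions agree
```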

Proposition 3.10. Let \psi_{1} and \psi_{2} be SBrs over \mathfrak{U}\times \mathfrak{V}, \gamma\in F(\mathfrak{U}) and 1\geq \alpha > 0. Then, the following properties hold w.r.t foresets:

(1) ^{\gamma_{\alpha}}\underline{\psi_{1}+\psi_{2}}_{o}(e) = (^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e))_{\alpha}

(2) ^{\gamma_{\alpha}}\overline{\psi_{1}+\psi_{2}}^{o}(e) = (^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o}(e))_{\alpha}.

    Proof. The proof is identical to the Proposition 3.9 proof.

In this section, the notion of optimistic multigranulation roughness of a fuzzy set based on two soft binary relations is generalized to optimistic multigranulation roughness based on multiple SBrs.

Definition 4.1. Let \mathfrak{U} and \mathfrak{V} be two non-empty finite universes and \theta a family of SBrs over \mathfrak{U}\times \mathfrak{V}. Then (\mathfrak{U}, \mathfrak{V}, \theta) is called a multigranulation generalized soft approximation space (MGGSAS) over two universes. As is apparent, the MGGSAS (\mathfrak{U}, \mathfrak{V}, \theta) is a generalization of the soft approximation space (\mathfrak{U}, \mathfrak{V}, \psi) over dual universes.

Definition 4.2. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over two universes and \mu be an FS over \mathfrak{V}. The OLAP \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu} and OUAP ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu} of the FS \mu w.r.t the aftersets of the SBrs (\psi_{j}, A)\in \theta are given by

\begin{align*} \underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}^{\mu}(e)(a) & = \begin{cases} \wedge\{\mu(b): b\in \bigcup_{j = 1}^{m}a\psi_{j}(e), b\in \mathfrak{V}\}, & \text{if}\ \bigcup_{j = 1}^{m}a\psi_{j}(e)\neq \emptyset\\ 0, & \text{otherwise} \end{cases}\\ ^{o}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{\mu}(e)(a) & = \begin{cases} \vee\{\mu(b): b\in \bigcap_{j = 1}^{m}a\psi_{j}(e), b\in \mathfrak{V}\}, & \text{if}\ \bigcap_{j = 1}^{m}a\psi_{j}(e)\neq \emptyset\\ 0, & \text{otherwise.} \end{cases} \end{align*}

where a\psi_{j}(e) = \{b\in \mathfrak{V}: (a, b)\in \psi_{j}(e)\} is the afterset of a, for a\in \mathfrak{U} and e\in A.

    Obviously (mj=1ψj_μo,A) and ( o¯mj=1ψjμ,A) are two FSS over U.

Definition 4.3. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \gamma be an FS over \mathfrak{U}. The OLAP ^{\gamma}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o} and OUAP ^{\gamma}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o} of the FS \gamma w.r.t the foresets of the SBrs (\psi_{j}, A)\in \theta are given by

\begin{align*} ^{\gamma}\underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}(e)(b) & = \begin{cases} \wedge\{\gamma(a): a\in \bigcup_{j = 1}^{m}\psi_{j}(e)b, a\in \mathfrak{U}\}, & \text{if}\ \bigcup_{j = 1}^{m}\psi_{j}(e)b\neq \emptyset\\ 0, & \text{otherwise} \end{cases}\\ ^{\gamma}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{o}(e)(b) & = \begin{cases} \vee\{\gamma(a): a\in \bigcap_{j = 1}^{m}\psi_{j}(e)b, a\in \mathfrak{U}\}, & \text{if}\ \bigcap_{j = 1}^{m}\psi_{j}(e)b\neq \emptyset\\ 0, & \text{otherwise.} \end{cases} \end{align*}

where \psi_{j}(e)b = \{a\in \mathfrak{U}: (a, b)\in \psi_{j}(e)\} is the foreset of b, for b\in \mathfrak{V} and e\in A.

    Obviously, ( γmj=1ψj_o,A) and ( γ¯mj=1ψjo,A) are two FSS over V.

    Moreover, mj=1ψj_μo:AF(U),o¯mj=1ψjμ:AF(U) and γmj=1ψj_o:AF(V),γ¯mj=1ψjo:AF(V).
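The aggregation over m aftersets in Definition 4.2 can be sketched by folding the union (for the lower approximation) and the intersection (for the upper approximation) across the family of relations. The reduce-based helpers below are an illustrative assumption of ours, not the authors' code; with m = 2 they reproduce the two-relation example of Section 3.

```python
from functools import reduce

def olap(aftersets, mu):
    """Optimistic lower approximation w.r.t. aftersets of m relations:
    infimum of mu over the union of all m aftersets of a (0 if the union is empty)."""
    out = {}
    for a in aftersets[0]:
        union = reduce(set.union, (rel[a] for rel in aftersets))
        out[a] = min((mu[b] for b in union), default=0.0)
    return out

def ouap(aftersets, mu):
    """Optimistic upper approximation: supremum of mu over the
    intersection of all m aftersets of a (0 if the intersection is empty)."""
    out = {}
    for a in aftersets[0]:
        inter = reduce(set.intersection, (rel[a] for rel in aftersets))
        out[a] = max((mu[b] for b in inter), default=0.0)
    return out

# m = 2 case, with the aftersets and fuzzy set of the Section 3 example.
psi1 = {"a1": {"b1", "b2", "b4"}, "a2": {"b2"}, "a3": {"b3", "b4"}, "a4": set()}
psi2 = {"a1": {"b2", "b3", "b4"}, "a2": {"b2"}, "a3": {"b4"}, "a4": {"b2", "b3"}}
mu1 = {"b1": 0.2, "b2": 0.7, "b3": 0.3, "b4": 0.0}
print(olap([psi1, psi2], mu1))  # {'a1': 0.0, 'a2': 0.7, 'a3': 0.0, 'a4': 0.3}
```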

    Proposition 4.1. Let (U,V,θ) be MGGSAS over two universes and μF(V). Then the following properties for mj=1ψj_μo,o¯mj=1ψjμ hold w.r.t the aftersets.

    (1) mj=1ψj_μmj=1ψj_μo

    (2) mj=1¯ψjμo¯mj=1ψjμ.

    Proof. The proof is identical to the Proposition 3.3 proof.

    Proposition 4.2. Let (U,V,θ) be MGGSAS over dual universes and γF(U). Then the following properties for γmj=1ψj_o,γ¯mj=1ψjo hold w.r.t the foresets.

    (1) γmj=1ψj_omi=1γψj_

    (2) γ¯mj=1ψjomj=1γ¯ψj.

    Proof. The proof is identical to the Proposition 3.3 proof.

Proposition 4.3. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes. Then the following hold w.r.t the aftersets.

(1) \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{1}(e) = 1 for all e\in A, if a\psi_{j}(e)\neq \emptyset for some j\leq m

(2) ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{1}(e) = 1 for all e\in A, if \bigcap_{j = 1}^{m}a\psi_{j}(e)\neq \emptyset

(3) \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{0} = 0 =\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{0}.

Proof. The proof is identical to the Proposition 3.5 proof.

Proposition 4.4. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes. Then the following hold w.r.t the foresets.

(1) ^{1}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}(e) = 1 for all e\in A, if \psi_{j}(e)b\neq \emptyset for some j\leq m

(2) ^{1}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}(e) = 1 for all e\in A, if \bigcap_{j = 1}^{m}\psi_{j}(e)b\neq \emptyset

(3) ^{0}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o} = 0 =\ ^{0}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}.

Proof. The proof is identical to the Proposition 3.5 proof.

Proposition 4.5. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \mu, \mu_{1}, \mu_{2}\in F(\mathfrak{V}). Then the following properties for \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu}, \ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu} hold w.r.t the aftersets.

(1) If \mu_{1}\subseteq \mu_{2} then \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}}\subseteq \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{2}},

(2) If \mu_{1}\subseteq \mu_{2} then ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}}\subseteq\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{2}},

(3) \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}\cap\mu_{2}} = \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}}\cap \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{2}},

(4) \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}\cup\mu_{2}}\supseteq \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}}\cup \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{2}},

(5) ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}\cup\mu_{2}} =\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}}\cup\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{2}},

(6) ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}\cap\mu_{2}}\subseteq\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}}\cap\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{2}}.

    Proof. The proof is identical to the Proposition 3.7 proof.

Proposition 4.6. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \gamma, \gamma_{1}, \gamma_{2}\in F(\mathfrak{U}). Then the following properties for ^{\gamma}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}, \ ^{\gamma}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o} hold w.r.t the foresets.

(1) If \gamma_{1}\subseteq \gamma_{2} then ^{\gamma_{1}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\subseteq\ ^{\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o},

(2) If \gamma_{1}\subseteq \gamma_{2} then ^{\gamma_{1}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\subseteq\ ^{\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o},

(3) ^{\gamma_{1}\cap\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o} =\ ^{\gamma_{1}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\cap\ ^{\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o},

(4) ^{\gamma_{1}\cup\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\supseteq\ ^{\gamma_{1}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\cup\ ^{\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o},

(5) ^{\gamma_{1}\cup\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o} =\ ^{\gamma_{1}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\cup\ ^{\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o},

(6) ^{\gamma_{1}\cap\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\subseteq\ ^{\gamma_{1}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\cap\ ^{\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}.

    Proof. The proof is identical to the Proposition 3.7 proof.

Proposition 4.7. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \mu_{1}, \mu_{2}, \mu_{3}, \ldots, \mu_{n}\in F(\mathfrak{V}), with \mu_{n}\supseteq \cdots \supseteq \mu_{3}\supseteq \mu_{2}\supseteq \mu_{1}. Then the following properties hold w.r.t the aftersets.

(1) \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{1}}\subseteq \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{2}}\subseteq \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{3}}\subseteq \cdots \subseteq \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu_{n}}

(2) ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{1}}\subseteq\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{2}}\subseteq\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{3}}\subseteq \cdots \subseteq\ ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu_{n}}.

    Proof. Straightforward.

Proposition 4.8. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \gamma_{1}, \gamma_{2}, \gamma_{3}, \ldots, \gamma_{n}\in F(\mathfrak{U}), with \gamma_{n}\supseteq \cdots \supseteq \gamma_{3}\supseteq \gamma_{2}\supseteq \gamma_{1}. Then the following properties hold w.r.t the foresets.

(1) ^{\gamma_{1}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\subseteq\ ^{\gamma_{2}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\subseteq\ ^{\gamma_{3}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}\subseteq \cdots \subseteq\ ^{\gamma_{n}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}

(2) ^{\gamma_{1}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\subseteq\ ^{\gamma_{2}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\subseteq\ ^{\gamma_{3}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}\subseteq \cdots \subseteq\ ^{\gamma_{n}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}.

    Proof. Straightforward.

Definition 4.4. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \mu\in F(\mathfrak{V}). For any 1\geq \alpha > 0, the level sets for \underline{\sum_{j = 1}^{m}\psi_{j}}_{o}^{\mu} and ^{o}\overline{\sum_{j = 1}^{m}\psi_{j}}^{\mu} of \mu are defined, respectively, as follows:

\begin{align*} (\underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}^{\mu}(e))_{\alpha} & = \{a\in \mathfrak{U}: \underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}^{\mu}(e)(a)\geq \alpha\}\\ (^{o}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{\mu}(e))_{\alpha} & = \{a\in \mathfrak{U}:\ ^{o}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{\mu}(e)(a)\geq \alpha\}. \end{align*}

Definition 4.5. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \gamma\in F(\mathfrak{U}). For any 1\geq \alpha > 0, the level sets for ^{\gamma}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o} and ^{\gamma}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o} of \gamma are defined, respectively, as follows:

\begin{align*} (^{\gamma}\underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}(e))_{\alpha} & = \{b\in \mathfrak{V}:\ ^{\gamma}\underline{\sum\limits_{j = 1}^{m}\psi_{j}}_{o}(e)(b)\geq \alpha\}\\ (^{\gamma}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{o}(e))_{\alpha} & = \{b\in \mathfrak{V}:\ ^{\gamma}\overline{\sum\limits_{j = 1}^{m}\psi_{j}}^{o}(e)(b)\geq \alpha\}. \end{align*}

Proposition 4.9. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \mu\in F(\mathfrak{V}). For any 1\geq \alpha > 0, the following properties hold w.r.t aftersets:

    (1) mj=1ψj_(μα)o(e)=(mj=1ψj_μo(e))α

    (2) o¯mj=1ψj(μα)(e)=(o¯mj=1ψjμ(e))α.

Proof. The proof is identical to the Proposition 3.9 proof.

Proposition 4.10. Let (\mathfrak{U}, \mathfrak{V}, \theta) be a MGGSAS over dual universes and \gamma\in F(\mathfrak{U}). For any 1\geq \alpha > 0, the following properties hold w.r.t foresets:

(1) ^{\gamma_{\alpha}}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}(e) = (^{\gamma}\underline{\sum_{j = 1}^{m}\psi_{j}}_{o}(e))_{\alpha}

(2) ^{\gamma_{\alpha}}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}(e) = (^{\gamma}\overline{\sum_{j = 1}^{m}\psi_{j}}^{o}(e))_{\alpha}.

    Proof. The proof is identical to the Proposition 3.9 proof.

In this section, we describe the accuracy measures, the rough measure, and an example of MGRFS with respect to aftersets and foresets.

Definition 5.1. Let \psi_{1} and \psi_{2} be two SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V} and 1\geq \alpha\geq \beta > 0. Then the accuracy measure (or degree of accuracy) of the membership of \mu\in F(\mathfrak{V}), with respect to \alpha, \beta and w.r.t the aftersets of \psi_{1}, \psi_{2}, is defined as

OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} = \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|}\ \ for\ all\ e_{i}\in A,

where |\cdot| denotes the cardinality and OA denotes the optimistic accuracy measure. It is obvious that 0\leq OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)}\leq 1. When OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} = 1, the fuzzy set \mu\in F(\mathfrak{V}) is definable with respect to aftersets. The optimistic rough measure is defined as

    OR(ψ1+ψ2μ(ei))(α,β)=1OA(ψ1+ψ2μ(ei))(α,β).

Definition 5.2. Let \psi_{1} and \psi_{2} be two SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V} and 1\geq \alpha\geq \beta > 0. The accuracy measure (or degree of accuracy) of the membership of \gamma\in F(\mathfrak{U}), with respect to \alpha, \beta and w.r.t the foresets of \psi_{1}, \psi_{2}, is defined as

OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)} = \frac{|(^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e_{i}))_{\alpha}|}{|(^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o}(e_{i}))_{\beta}|}\ \ for\ all\ e_{i}\in A,

where |\cdot| denotes the cardinality and OA denotes the optimistic accuracy measure. It is obvious that 0\leq OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)}\leq 1. When OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)} = 1, the fuzzy set \gamma\in F(\mathfrak{U}) is definable with respect to foresets. The optimistic rough measure is defined as

    OR(γψ1+ψ2(ei))(α,β)=1OA(γψ1+ψ2(ei))(α,β).

Example 5.1. (Example 3.1 continued) Let \psi_{1} and \psi_{2} be two SBrs from a non-empty universal set \mathfrak{U} to \mathfrak{V} as given in Example 3.1. Then, for \mu\in F(\mathfrak{V}) defined in Example 3.1, \beta = 0.2 and \alpha = 0.4, the cut sets w.r.t aftersets are as follows, respectively.

    \begin{align*} ({\underline{\psi_{1}+\psi_{2}}_{o}}^{\mu}(e_{1}))_{0.4} = & \{a_{1}, a_{6}\}\\ ({\underline{\psi_{1}+\psi_{2}}_{o}}^{\mu}(e_{2}))_{0.4} = & \{a_{5}, a_{6}, a_{7}\}. \end{align*}
    \begin{align*} (^{o}{\overline{\psi_{1}+\psi_{2}}}^{\mu}(e_{1}))_{0.2} = & \{a_{1}, a_{2}, a_{4}, a_{7}\}\\ (^{o}{\overline{\psi_{1}+\psi_{2}}}^{\mu}(e_{2}))_{0.2} = & \{a_{1}, a_{3}, a_{5}, a_{8}\}. \end{align*}

    Then the accuracy measures for \mu\in F(\mathfrak{V}) with respect to \beta = 0.2 and \alpha = 0.4 and w.r.t aftersets of SBrs \psi_{1}, \psi_{2} are calculated as

    \begin{align*} OA({\psi_{1}+\psi_{2}}^{\mu}(e_{1}))_{(\alpha, \beta)} = \frac{|({\underline{\psi_{1}+\psi_{2}}_{o}}^{\mu}(e_{1}))_{0.4}|}{|(^{o}{\overline{\psi_{1}+\psi_{2}}}^{\mu}(e_{1}))_{0.2}|} = & \frac{2}{4} = 0.5, \\ OA({\psi_{1}+\psi_{2}}^{\mu}(e_{2}))_{(\alpha, \beta)} = \frac{|({\underline{\psi_{1}+\psi_{2}}_{o}}^{\mu}(e_{2}))_{0.4}|}{|(^{o}{\overline{\psi_{1}+\psi_{2}}}^{\mu}(e_{2}))_{0.2}|} = & \frac{3}{4} = 0.75. \end{align*}

    Hence, OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} shows the degree to which the FS \mu\in F(\mathfrak{V}) is accurate constrained to the parameters \beta = 0.2 and \alpha = 0.4 for i = 1, 2 w.r.t aftersets. Similarly for \gamma \in F(\mathfrak{U}) defined in Example 3.1, \beta = 0.2, and \alpha = 0.4. Then \alpha cut sets w.r.t foresets are as follows respectively.

    \begin{align*} (^{\gamma}{\underline{\psi_{1}+\psi_{2}}_{o}}(e_{1}))_{0.4} = & \{b_{1}, b_{6}\}\\ (^{\gamma}{\underline{\psi_{1}+\psi_{2}}_{o}}(e_{2}))_{0.4} = & \{b_{2}, b_{5}\}. \end{align*}

    And

    \begin{align*} (^{\gamma}{\overline{\psi_{1}+\psi_{2}}}^{o}(e_{1}))_{0.2} = & \{b_{3}, b_{5}\}\\ (^{\gamma}{\overline{\psi_{1}+\psi_{2}}}^{o}(e_{2}))_{0.2} = & \{b_{1}, b_{2}\}. \end{align*}

    Then the accuracy measures for \gamma\in F(\mathfrak{U}) with respect to \beta = 0.2 and \alpha = 0.4 and w.r.t foresets of SBrs \psi_{1}, \psi_{2} are calculated as

    \begin{align*} OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{1}))_{(\alpha, \beta)} = \frac{|(^{\gamma}{\underline{\psi_{1}+\psi_{2}}_{o}}(e_{1}))_{0.4}|}{|(^{\gamma}{\overline{\psi_{1}+\psi_{2}}}^{o}(e_{1}))_{0.2}|} = & \frac{2}{2} = 1, \\ OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{2}))_{(\alpha, \beta)} = \frac{|(^{\gamma}{\underline{\psi_{1}+\psi_{2}}_{o}}(e_{2}))_{0.4}|}{|(^{\gamma}{\overline{\psi_{1}+\psi_{2}}}^{o}(e_{2}))_{0.2}|} = & \frac{2}{2} = 1. \end{align*}

    Hence, OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)} shows the degree to which the fuzzy set \gamma\in F(\mathfrak{U}) is accurate constrained to the parameters \beta = 0.2 and \alpha = 0.4 for i = 1, 2 w.r.t foresets.
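Numerically, the accuracy measure reduces to a ratio of cut-set cardinalities. A short sketch (ours), with the cut sets copied from Example 5.1 for \alpha = 0.4 and \beta = 0.2:

```python
# Alpha-cuts of the lower approximations and beta-cuts of the upper
# approximations w.r.t. aftersets, copied from Example 5.1.
lower_cut = {"e1": {"a1", "a6"}, "e2": {"a5", "a6", "a7"}}
upper_cut = {"e1": {"a1", "a2", "a4", "a7"}, "e2": {"a1", "a3", "a5", "a8"}}

# OA = |lower cut| / |upper cut|; the optimistic rough measure is OR = 1 - OA.
OA = {e: len(lower_cut[e]) / len(upper_cut[e]) for e in lower_cut}
OR = {e: 1 - v for e, v in OA.items()}
print(OA)  # {'e1': 0.5, 'e2': 0.75}
```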

    Proposition 5.1. Let \psi_{1} and \psi_{2} be two SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V}, \ \mu\in F(\mathfrak{V}) and 1\geq \alpha\geq \beta > 0. Then

(1) OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} increases with the increase in \beta, if \alpha stays fixed.

(2) OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} decreases with the increase in \alpha, if \beta stays fixed.

    Proof.

    (1) Let \alpha stands fixed and 0 < {\beta}_{1}\leq {\beta}_{2}\leq 1. Then we have |(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta_{2}}|\leq |(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta_{1}}|. This implies that \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta_{1}}|}\leq \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta_{2}}|}, that is OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta_{1})}\leq OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta_{2})}. This shows that OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} increases with the increase in \beta \forall e_{i}\in A.

(2) Let \beta stay fixed and 0 < {\alpha}_{1}\leq {\alpha}_{2}\leq 1. Then we have |(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha_{2}}|\leq |(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha_{1}}|. This implies that \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha_{2}}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|}\leq \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha_{1}}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|}, that is OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha_{2}, \beta)}\leq OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha_{1}, \beta)}. This shows that OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)} decreases with the increase in \alpha \ \forall\ e_{i}\in A.

    Proposition 5.2. Let \psi_{1} and \psi_{2} be two SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V}, \gamma\in F(\mathfrak{U}) and 1\geq\alpha \geq \beta > 0. Then

    (1) OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)} increases with the increase in \beta, if \alpha stands fixed.

(2) OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)} decreases with the increase in \alpha, if \beta stays fixed.

    Proof. The proof is identical to the Proposition 5.1 proof.

Proposition 5.3. Let \psi_{1} and \psi_{2} be SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V}, 1\geq \alpha \geq \beta > 0 and \mu, \nu\in F(\mathfrak{V}), with \mu\leq \nu. Then the following properties hold w.r.t aftersets.

(1) OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)}\leq OA({\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{(\alpha, \beta)}, whenever (^{o}\overline{\psi_{1}+\psi_{2}}^{\mu})_{\beta} = (^{o}\overline{\psi_{1}+\psi_{2}}^{\nu})_{\beta}.

(2) OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)}\geq OA({\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{(\alpha, \beta)}, whenever (\underline{\psi_{1}+\psi_{2}}_{o}^{\mu})_{\alpha} = (\underline{\psi_{1}+\psi_{2}}_{o}^{\nu})_{\alpha}.

    Proof.

(1) Let 1\geq \alpha \geq \beta > 0 and \mu, \nu\in F(\mathfrak{V}) be such that \mu\leq \nu. Then \underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i})\leq\underline{\psi_{1}+\psi_{2}}_{o}^{\nu}(e_{i}), that is |(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|\leq |(\underline{\psi_{1}+\psi_{2}}_{o}^{\nu}(e_{i}))_{\alpha}|. Since (^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta} = (^{o}\overline{\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{\beta}, this implies that \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|}\leq \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\nu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{\beta}|}. Hence OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)}\leq OA({\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{(\alpha, \beta)} \ \forall\ e_{i}\in A.

(2) Let 1\geq \alpha \geq \beta > 0 and \mu, \nu\in F(\mathfrak{V}) be such that \mu\leq \nu. Then ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i})\leq\ ^{o}\overline{\psi_{1}+\psi_{2}}^{\nu}(e_{i}), that is |(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|\leq |(^{o}\overline{\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{\beta}|. Since (\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha} = (\underline{\psi_{1}+\psi_{2}}_{o}^{\nu}(e_{i}))_{\alpha}, this implies that \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{\beta}|}\geq \frac{|(\underline{\psi_{1}+\psi_{2}}_{o}^{\nu}(e_{i}))_{\alpha}|}{|(^{o}\overline{\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{\beta}|}. Hence OA({\psi_{1}+\psi_{2}}^{\mu}(e_{i}))_{(\alpha, \beta)}\geq OA({\psi_{1}+\psi_{2}}^{\nu}(e_{i}))_{(\alpha, \beta)} \ \forall\ e_{i}\in A.

Proposition 5.4. Let \psi_{1} and \psi_{2} be SBrs from a non-empty universe \mathfrak{U} to \mathfrak{V}, 1\geq \alpha \geq \beta > 0 and \gamma, \delta \in F(\mathfrak{U}), with \gamma\leq \delta. Then the following properties hold w.r.t foresets.

(1) OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)}\leq OA(^{\delta}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)}, whenever (^{\gamma}\overline{\psi_{1}+\psi_{2}}^{o})_{\beta} = (^{\delta}\overline{\psi_{1}+\psi_{2}}^{o})_{\beta}.

(2) OA(^{\gamma}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)}\geq OA(^{\delta}{\psi_{1}+\psi_{2}}(e_{i}))_{(\alpha, \beta)}, whenever (^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o})_{\alpha} = (^{\delta}\underline{\psi_{1}+\psi_{2}}_{o})_{\alpha}.

    Proof. The proof is identical to the Proposition 5.3 proof.

Soft sets were first applied to DM problems by Maji et al. [31], but that approach deals with problems based on a crisp SS rather than an FSS. FSS can solve real-life decision-making problems, since it deals with the uncertainty and vagueness of data. There are several techniques for using fuzzy soft sets to solve decision-making challenges. Roy and Maji [40] presented a novel method of object recognition from inaccurate multi-observer data. The limitations of Roy and Maji [40] were overcome by Feng et al. [17], and Hou [18] made use of grey relational analysis to handle decision-making problems. Following FSS theory, this section proposes DM methods based on multi-SBr. The majority of FSS-based approaches to DM use choice values "C_{k}", so it makes sense to choose the objects with the highest choice value as the best option.

The lower and upper approximations are the two closest approximations of an FS in the other universe. Using the FSLAP and FSUAP of an FS \mu \in F(\mathfrak{V}), we can therefore determine the two values \underline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}(e_{i})(a_{l}) and \overline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}(e_{i})(a_{l}) that are most closely related, w.r.t the aftersets, to the decision alternative a_{l}\in \mathfrak{U}. To address DM issues based on RFSS, we therefore redefine the choice value C_{l} for the decision alternative a_{l} of the universe \mathfrak{U}.

    For the proposed model, we present two algorithms, each of which consists of the actions outlined below.

    Algorithm 1: The algorithm for solving a DM problem using aftersets is as follows.

    {\bf Step\ 1:} Compute the lower MGFSS approximation \underline{\sum_{j = 1}^{n}\psi_{j}}^{\mu} and upper MGFSS approximation \overline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}, of fuzzy set \mu w.r.t aftersets.

    {\bf Step\ 2:} Compute the sum of lower MGFSS approximation \sum_{i = 1}^{n}(\underline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}(e_{i})(a_{l})) and the sum of upper MGFSS approximation \sum_{i = 1}^{n}(\overline{\sum_{j = 1}^{n}\psi_{j}}^{ \mu}(e_{i})(a_{l})), corresponding to j w.r.t aftersets.

    {\bf Step\ 3:} Compute the choice value C_{l} = \sum_{i = 1}^{ n}(\underline{\sum_{j = 1}^{ n}\psi_{j}}^{\mu}(e_{i})(a_{l}))+\sum_{i = 1}^{ n}(\overline{\sum_{j = 1}^{ n}\psi_{j}}^{\mu}(e_{i})(a_{l})), \ a_{l}\in \mathfrak{U} w.r.t aftersets.

{\bf Step\ 4:} The preferred decision is a_{k}\in \mathfrak{U} if C_{k} = max_{l = 1}^{|\mathfrak{U}|}C_{l}.

{\bf Step\ 5:} The decision that is the worst is a_{k}\in \mathfrak{U} if C_{k} = min_{l = 1}^{|\mathfrak{U}|}C_{l}.

{\bf Step\ 6:} If k has more than one value, then any one of a_{k} may be chosen.

    Algorithm 2: The following is an algorithm for approaching a DM problem w.r.t foresets.

    {\bf Step\ 1:} Compute the lower MGFSS approximation ^{\gamma}\underline{\sum_{j = 1}^{n}\psi_{j}} and upper MGFSS approximation ^{\gamma}\overline{\sum_{j = 1}^{n}\psi_{j}}, of fuzzy set \gamma with respect to foresets.

    {\bf Step\ 2:} Compute the sum of lower MGFSS approximations \sum_{i = 1}^{n}(^{\gamma}\underline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l})) and the sum of upper MGFSS approximation \sum_{i = 1}^{n}(^{\gamma}\overline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l})), corresponding to j w.r.t foresets.

    {\bf Step\ 3:} Compute the choice value C_{l} = \sum_{i = 1}^{n}(^{\gamma}\underline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l}))+\sum_{i = 1}^{n}(^{\gamma}\overline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l})), \ b_{l}\in \mathfrak{V} w.r.t foresets.

    {\bf Step\ 4:} The preferred decision is b_{k}\in \mathfrak{V} if C_{k} = max_{l = 1}^{|\mathfrak{V}|}C_{l}.

    {\bf Step\ 5:} The decision that is the worst is b_{k}\in \mathfrak{V} if C_{k} = min_{l = 1}^{|\mathfrak{V}|}C_{l}.

    {\bf Step\ 6:} If k has more than one value, then any one of b_{k} may be chosen.

    An application of the decision-making approach

Example 6.1. (Example 3.1 continued) Consider the SBrs of Example 3.1 again, where a franchise \mathfrak{XYZ} wants to pick the best foreign all-rounder for their team from the Platinum and Diamond categories.

\begin{array}{c} Define\ \mu:\mathfrak{V}\rightarrow [0, 1],\ which\ represents\ the\ preference\ of\ the\ player\ given\ by\ franchise\ \mathfrak{XYZ},\ such\ that\\ \mu = \frac{ 0.9}{b_{1}}+\frac{0.8}{b_{2}}+\frac{0.4}{b_{3}}+\frac{0.3}{b_{5}}+\frac{0.1}{b_{6}}+\frac{1}{b_{7}}. \\ And\\ define\ \gamma:\mathfrak{U}\rightarrow [0, 1],\ which\ represents\ the\ preference\ of\ the\ player\ given\ by\ franchise\ \mathfrak{XYZ},\ such\ that\\ \gamma = \frac{0.2}{a_{1}}+\frac{1}{a_{2}}+\frac{0.5}{a_{3}}+\frac{0.9}{a_{4}}+\frac{0.6}{a_{5}}+\frac{0.7}{a_{6}}+\frac{0.1}{a_{7}}+\frac{0.3}{a_{8}}. \end{array}

    Consider Tables 1 and 2 after applying the above algorithms.

    Table 1.  The optimistic result of the decision algorithm with respect to aftersets.
    \underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{1}) \underline{\psi_{1}+\psi_{2}}_{o}^{\mu}(e_{2}) ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{1}) ^{o}\overline{\psi_{1}+\psi_{2}}^{\mu}(e_{2}) Choice value C_{k}
    a_{1} 0.4 0 0.8 0.4 1.6
    a_{2} 0.3 0 0.3 0 0.6
    a_{3} 0 0.1 0 0.9 1.0
    a_{4} 0.1 0 0.4 0 0.5
    a_{5} 0 0.8 0 0.8 1.6
    a_{6} 1 0.6 0 0 1.6
    a_{7} 0 0.1 1 0.1 1.2
    a_{8} 0.3 0.9 0 0.9 2.1

    Table 2.  The optimistic result of the decision algorithm with respect to foresets.
    ^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e_{1}) ^{\gamma}\underline{\psi_{1}+\psi_{2}}_{o}(e_{2}) ^{\gamma}\overline{\psi_{1}+\psi_{2}}^{p}(e_{1}) ^{\gamma}\overline{\psi_{1}+\psi_{2}}^{p}(e_{2}) Choice value C^{'}_{k}
    b_{1} 0.6 0.3 0 0.5 1.6
    b_{2} 0.2 0.6 0 0.9 1.7
    b_{3} 0.1 0.1 0.9 0 1.1
    b_{4} 0.1 0.2 0 0 0.3
    b_{5} 0.3 0.7 1 0 2
    b_{6} 0.6 0.1 0 0.1 0.8
    b_{7} 0.1 0.3 0.1 0 0.5


Here the choice values are C_{l} = \sum_{i = 1}^{n}(\underline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}(e_{i})(a_{l}))+\sum_{i = 1}^{n}(\overline{\sum_{j = 1}^{n}\psi_{j}}^{\mu}(e_{i})(a_{l})), \ a_{l}\in \mathfrak{U}, w.r.t aftersets, and C^{'}_{l} = \sum_{i = 1}^{n}(^{\gamma}\underline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l}))+\sum_{i = 1}^{n}(^{\gamma}\overline{\sum_{j = 1}^{n}\psi_{j}}(e_{i})(b_{l})), \ b_{l}\in \mathfrak{V}, w.r.t foresets.

From Table 1 it is clear that the maximum choice value C_{k} = 2.1 = C_{8} is scored by the player a_{8}, so the decision is in favor of selecting the player a_{8}, while player a_{4}, with the minimum choice value, is ignored. Hence franchise \mathfrak{XYZ} will choose the player a_{8} from the Platinum category w.r.t aftersets. Similarly, from Table 2, the maximum choice value C^{'}_{k} = 2 = C^{'}_{5} is scored by the player b_{5}, so the decision is in favor of selecting the player b_{5}, while player b_{4} is ignored. Hence franchise \mathfrak{XYZ} will choose the player b_{5} from the Diamond category w.r.t foresets.
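The choice-value step of Algorithm 1 amounts to a row sum over the four approximation columns of Table 1, followed by an argmax (Step 4) and an argmin (Step 5). A sketch of ours over the Table 1 values; the rounding is only there to absorb floating-point noise in the sums:

```python
# Rows of Table 1: lower(e1), lower(e2), upper(e1), upper(e2) per alternative.
table1 = {
    "a1": (0.4, 0.0, 0.8, 0.4), "a2": (0.3, 0.0, 0.3, 0.0),
    "a3": (0.0, 0.1, 0.0, 0.9), "a4": (0.1, 0.0, 0.4, 0.0),
    "a5": (0.0, 0.8, 0.0, 0.8), "a6": (1.0, 0.6, 0.0, 0.0),
    "a7": (0.0, 0.1, 1.0, 0.1), "a8": (0.3, 0.9, 0.0, 0.9),
}

# Step 3: the choice value C_l is the sum of the lower and upper approximations.
choice = {a: round(sum(vals), 10) for a, vals in table1.items()}

# Steps 4 and 5: best and worst alternatives.
best = max(choice, key=choice.get)
worst = min(choice, key=choice.get)
print(best, choice[best])    # a8 2.1
print(worst, choice[worst])  # a4 0.5
```

This reproduces the decision of the example: a_{8} is selected and a_{4} is ignored.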

In this section, we analyze the effectiveness of our method comparatively. To deal with incompleteness and vagueness, an MGRS model was proposed in terms of multiple equivalence relations by Qian et al. [39], which improves on RS. Xu et al. [57] fostered the model of MGFRS by unifying MGRS theory and FSs. However, in daily life the decision-making process might depend on two or more universes. Sun and Ma [47] initiated the notion of MGRS over two universes, with good modeling capabilities, to overcome this situation. To make the equivalence relation more flexible, its conditions had to be relaxed: Shabir et al. [43] presented the MGRS of a crisp set based on soft binary relations and its application in data classification, and Ayub et al. [4] introduced SMGRS, which is a particular case of MGRS [43]. An FS is better than a crisp set at coping with uncertainty. Here, we have a novel hybrid model of OMGFRS using multi-soft binary relations. Our suggested model is more capable of capturing uncertainty because of its parametrization of binary relations in a multigranulation environment. Moreover, in our proposed OMGFRS model, we replace the crisp set with an FS, since a crisp set cannot address the uncertainty and vagueness of actual situations. The main advantage of this model is that it approximates a fuzzy set of universe \mathfrak{U} (\mathfrak{V}) in the other universe \mathfrak{V} (\mathfrak{U}), and we acquire a fuzzy set with respect to each parameter, that is, a fuzzy soft set over \mathfrak{V} (\mathfrak{U}). Hence the fuzzy soft set is more capable than the crisp and fuzzy sets of addressing uncertainty.

The MGR of an FS based on SBr over dual universes is investigated in this article. We first defined the roughness of an FS with respect to the aftersets and foresets of two SBrs, and then used the aftersets and foresets to approximate an FS \mu\in F(\mathfrak{V}) in universe \mathfrak{U} and an FS \gamma\in F(\mathfrak{U}) in universe \mathfrak{V}. In this way, we obtained two FSS, over \mathfrak{U} and \mathfrak{V}, with respect to aftersets and foresets. We also looked into the essential properties of the MGR of an FS. Then we generalized this definition to MGRFS based on multiple SBrs. For this proposed multigranulation roughness, we also defined accuracy and roughness measures. Moreover, we provided two decision-making algorithms, with respect to aftersets and foresets, as well as an example of their use in decision-making problems. The vital feature of this method is that it allows us to approximate a fuzzy set of one universe in another universe, so we can select an object from one universe based on the information of the other. Future research will concentrate on how the proposed method might be used to solve a wider variety of selection problems in different fields, like medical science, social science, and management science.

    The authors declare no conflict of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)