Research article

The discernibility approach for multi-granulation reduction of generalized neighborhood decision information systems

  • Received: 29 September 2024 Revised: 16 November 2024 Accepted: 29 November 2024 Published: 19 December 2024
  • MSC : 68T30, 68T37

  • Attribute reduction of a decision information system (DIS) using multi-granulation rough sets is one of the important applications of granular computing. Constructing discernibility matrices by rough sets to obtain attribute reducts of a DIS is an important reduction method. By analyzing the commonalities between the multi-granulation reduction structure of decision multi-granulation spaces and that of incomplete DISs based on the discernibility tool, this paper explored a general model for the multi-granulation reduction of DISs by the discernibility technique. First, the definition of the generalized neighborhood decision information system (GNDIS) was presented. Second, knowledge reduction of GNDISs by multi-granulation rough sets was discussed, and discernibility matrices and discernibility functions were constructed to characterize the multi-granulation reduction structures of GNDISs. Third, the multi-granulation reduction structures of decision multi-granulation spaces and incomplete DISs were characterized by the discernibility-based reduction theory of GNDISs. The multi-granulation reduction of GNDISs by the discernibility tool thus provides a theoretical foundation for designing multi-granulation reduction algorithms for DISs.

    Citation: Yanlan Zhang, Changqing Li. The discernibility approach for multi-granulation reduction of generalized neighborhood decision information systems[J]. AIMS Mathematics, 2024, 9(12): 35471-35502. doi: 10.3934/math.20241684




    With the development of information technology, datasets with large numbers of features have been collected in many application fields. However, datasets usually contain many redundant features, which may impair the classification ability of the data and increase the complexity of learning algorithms. Feature selection is a data preprocessing technique that selects a subset of the original feature set to improve the performance of learning algorithms. So far, feature selection has been applied to rule extraction [1], decision-making [4,24], and data mining [30].

    Rough set theory [12] is a typical granular computing model and a meaningful mathematical tool for feature selection, which is also called attribute reduction in rough set theory. In rough set theory, datasets are represented as information systems. Due to the diversity of datasets, different types of information systems are discussed, whose attribute reduction structures are explored by different rough set models. For example, the attribute reduction of a complete decision information system (CDIS) was investigated based on the classical Pawlak rough set model, which is defined by equivalence (indiscernibility) relations or partitions [13,16,31]. The attribute reduction of an incomplete decision information system (IDIS) was discussed by relation rough sets [3,9,17,23]. The neighborhood rough set model, defined on neighborhood granules, was used for attribute reduction of neighborhood DISs [5,6,33]. The covering rough sets defined on coverings were utilized for reducing covering DISs [2,22,25,26]. The rough set models mentioned above are constructed from a single granular structure. However, in real-world applications, more and more datasets should be described via multiple granular structures. Qian et al. proposed the concepts of multi-granulation rough sets in CDISs and discussed attribute reduction of CDISs based on multi-granulation rough sets [14,18]. Kong et al. explored multi-granulation reduction of information systems [8]. Attribute reduction of IDISs based on multi-granulation rough sets was explored in [15]. Zhang et al. defined a generalized multi-granulation fuzzy neighborhood rough set model and discussed a feature selection method based on the model [34].

    The technique of constructing discernibility matrices and discernibility functions, proposed by Skowron and Rauszer [20] and Skowron [19], is an important attribute reduction method. Yao and Zhao defined a minimal family of discernibility sets based on the family of discernibility sets to compute the reducts of CDISs [29]. Zhao et al. constructed the relative discernibility matrix of a CDIS to get relative reducts [35]. Jiang and Yu proposed a compactness discernibility information tree to obtain the minimal attribute reduction of a CDIS [7]. Ma et al. introduced a compressed binary discernibility matrix and designed an incremental attribute reduction algorithm for getting an attribute reduction set of a dynamic CDIS [10]. A binary discernibility matrix was designed to get attribute relative reducts of an IDIS [11]. Chen et al. [2], Wang et al. [22] and Yang et al. [25] constructed discernibility matrices to obtain the reducts of covering DISs. The discernibility techniques are also used to achieve the knowledge reduction of information systems based on multi-granulation rough sets. Tan et al. constructed discernibility matrices and discernibility functions to calculate the attribute reducts of a decision multi-granulation space (DMS) [21] and verified the effectiveness of the reduction methods by numerical experiments. However, the optimistic lower reducts are not calculated by discernibility matrices in [21]. Zhang et al. explored the attribute reduction of an IDIS by the discernibility approach in multi-granulation rough set theory [32] and presented numerical experiments to show the feasibility and effectiveness of the algorithms for getting reducts. However, the attribute reduction of IDISs based on optimistic multi-granulation rough sets is not considered in [32].

    The purpose of this paper is to analyze and compare the multi-granulation reduction theory of DMSs [21] and IDISs [32] from the perspective of discernibility, and to present a general model for the multi-granulation reduction of DISs by the discernibility technique, which can provide a theoretical basis for the multi-granulation reduction of DISs based on discernibility. The notion of a GNDIS is introduced in this paper, the multi-granulation reductions of GNDISs based on multi-granulation rough sets are discussed, and discernibility matrices are constructed to compute the multi-granulation reducts of GNDISs. Then, the pessimistic (or optimistic) approximations in DMSs and IDISs can be changed into the multi-granulation pessimistic (or optimistic) approximations in GNDISs. Moreover, the pessimistic multi-granulation reducts and the optimistic multi-granulation reducts of DMSs discussed in [21] can be computed by the discernibility matrices and discernibility functions based on the multi-granulation reduction theory of GNDISs. Additionally, the pessimistic multi-granulation reduction structures of IDISs discussed in [32] are characterized by the reduction theory of GNDISs based on the discernibility technique.

    The remainder of this paper is organized as follows. In Section 2, the definitions about multi-granulation rough sets are reviewed. In Section 3, we introduce the definition of the multi-granulation rough sets in a GNDIS and discuss knowledge reduction of a GNDIS based on the multi-granulation rough sets. The discernibility matrices and discernibility functions are constructed to characterize the multi-granulation reducts of a GNDIS. In Section 4, relationships between the multi-granulation reduction of DMSs and that of GNDISs are discussed. Moreover, the optimistic lower reducts of DMSs are discussed by the discernibility matrices in this section. Section 5 explores relationships between the multi-granulation reduction of IDISs and that of GNDISs. Then, the optimistic multi-granulation reductions of IDISs from the discernibility technique are presented in this section. Section 6 concludes this study.

    In this section, we review some basic concepts about multi-granulation rough sets, which were proposed by Qian et al. [18] to approximate a target concept using multiple binary relations.

    Suppose that (U, A, d) is a DIS, in which U is the universe, A is a family of condition attributes with a:U\rightarrow V_{a} for any a\in A , where V_{a} is the value set of a , and d:U\rightarrow V_{d} is a decision attribute, where V_{d} is the value set of d .

    In a DIS (U, A, d) , \mathcal{A} = \{A_{k}|A_{k}\subseteq A, k = 1, 2, \cdots, m\} is a family of attribute subsets. Then, (U, \mathcal{A}, d) is called a DMS [28]. Each A_{k}\in \mathcal{A} induces an equivalence relation R_{A_{k}} = \{(x, y)\in U\times U|\forall a\in A_{k}(a(x) = a(y))\} and generates a granular structure U/R_{A_{k}} = \{[x]_{A_{k}}|x\in U\} , in which [x]_{A_{k}} = \{y\in U|(x, y)\in R_{A_{k}}\} . The decision attribute d generates a partition U/R_{d} = \{[x]_{d}|x\in U\} = \{X_{1}, X_{2}, \cdots, X_{n}\} , each element of which is a decision class of objects with the same decision attribute value. The pessimistic and optimistic multi-granulation lower and upper approximations were discussed by Qian et al. [14,18].
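    To make the construction concrete, the following minimal Python sketch (with hypothetical attribute values; the names and data are illustrative, not from the paper) computes the equivalence classes [x]_{A_{k}} and the decision partition from a small data table:

```python
# Minimal sketch: equivalence classes [x]_{A_k} induced by an attribute
# subset A_k, and the decision partition U/R_d (hypothetical toy data).

def eq_class(U, value, attrs, x):
    """All objects agreeing with x on every attribute in attrs."""
    return frozenset(y for y in U
                     if all(value[y][a] == value[x][a] for a in attrs))

U = ["x1", "x2", "x3", "x4"]
value = {
    "x1": {"a1": 1, "a2": 0, "d": 1},
    "x2": {"a1": 1, "a2": 0, "d": 1},
    "x3": {"a1": 0, "a2": 1, "d": 2},
    "x4": {"a1": 1, "a2": 1, "d": 2},
}
A1 = ["a1", "a2"]                        # one attribute subset A_1
print({x: sorted(eq_class(U, value, A1, x)) for x in U})
# [x1]_{A_1} = [x2]_{A_1} = {x1, x2}; [x3]_{A_1} = {x3}; [x4]_{A_1} = {x4}
print({x: sorted(eq_class(U, value, ["d"], x)) for x in U})
# decision classes: {x1, x2} (d = 1) and {x3, x4} (d = 2)
```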

    Definition 1. [14] Given a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the pessimistic multi-granulation lower and upper approximations of X as

    \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \{x\in U|([x]_{A_{1}}\subseteq X)\wedge([x]_{A_{2}}\subseteq X)\wedge\cdots\wedge([x]_{A_{m}}\subseteq X)\},

    \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \sim\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(\sim X).

    One calls (\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X)) a pessimistic multi-granulation rough set.

    For each x\in U and \mathcal{H}\subseteq \mathcal{A} , denote \bigcup_{A_{k}\in \mathcal{H}}[x]_{A_{k}} by N_{\mathcal{H}}(x) . Then, it is easy to get that \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \{x\in U|N_{\mathcal{A}}(x)\subseteq X\} .

    Definition 2. [18] Given a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the optimistic multi-granulation lower and upper approximations of X as

    \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \{x\in U|([x]_{A_{1}}\subseteq X)\vee([x]_{A_{2}}\subseteq X)\vee\cdots\vee([x]_{A_{m}}\subseteq X)\},

    \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \sim\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\sim X).

    (\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X)) is called an optimistic multi-granulation rough set.

    If some values of attributes in A of a DIS (U, A, d) are missing or unknown, then each missing attribute value is expressed by the special symbol \ast , and (U, A, d) is termed an IDIS. For any A_{k}\subseteq A , a tolerance relation is defined by Kryszkiewicz as [9]:

    SIM(A_{k}) = \{(x, y)\in U\times U|\forall a\in A_{k}(a(x) = a(y)\vee a(x) = \ast\vee a(y) = \ast)\}.

    A granular structure is induced by A_{k} as U/SIM(A_{k}) = \{S_{A_{k}}(x)|x\in U\} , where S_{A_{k}}(x) = \{y\in U|(x, y)\in SIM(A_{k})\} .
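    As a small illustration (hypothetical data, with \ast written as the string "*"), the tolerance classes S_{A_{k}}(x) can be computed directly from the relation above:

```python
# Minimal sketch: Kryszkiewicz tolerance classes S_{A_k}(x) in an IDIS,
# where "*" stands for a missing attribute value (hypothetical toy data).

MISSING = "*"

def agree(u, v):
    """Two values are tolerant if they are equal or either is missing."""
    return u == v or u == MISSING or v == MISSING

def tolerance_class(U, value, attrs, x):
    return frozenset(y for y in U
                     if all(agree(value[y][a], value[x][a]) for a in attrs))

U = ["x1", "x2", "x3"]
value = {"x1": {"a1": 1, "a2": MISSING},
         "x2": {"a1": 1, "a2": 0},
         "x3": {"a1": 2, "a2": 0}}
print({x: sorted(tolerance_class(U, value, ["a1", "a2"], x)) for x in U})
# S(x1) = S(x2) = {x1, x2}; S(x3) = {x3}: the missing a2-value of x1
# tolerates any value, while x3 differs from both others on a1.
```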

    The multi-granulation rough sets in IDISs were introduced by Qian et al. [15].

    Definition 3. [15] Given an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the pessimistic multi-granulation lower and upper approximations of X as

    \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \{x\in U|(S_{A_{1}}(x)\subseteq X)\wedge(S_{A_{2}}(x)\subseteq X)\wedge\cdots\wedge(S_{A_{m}}(x)\subseteq X)\},

    \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \sim\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(\sim X).

    Define the optimistic multi-granulation lower and upper approximations of X as

    \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \{x\in U|(S_{A_{1}}(x)\subseteq X)\vee(S_{A_{2}}(x)\subseteq X)\vee\cdots\vee(S_{A_{m}}(x)\subseteq X)\},

    \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \sim\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\sim X).

    (\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X)) and (\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X)) are, respectively, the pessimistic multi-granulation rough set and the optimistic multi-granulation rough set of X .

    For each x\in U and \mathcal{H}\subseteq \mathcal{A}^{I} , denote \bigcup_{A_{k}\in \mathcal{H}}S_{A_{k}}(x) by IN_{\mathcal{H}}(x) . It is clear that \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \{x\in U|IN_{\mathcal{A}^{I}}(x)\subseteq X\} .

    In this section, we present the definitions of multi-granulation rough sets in GNDISs and explore multi-granulation reduction of GNDISs. Throughout this paper, the universe of discourse U is nonempty and finite. The family of all subsets of U is denoted by \mathcal{P}(U) . For X\subseteq U , \sim X is the complementary set of X .

    Some basic concepts about neighborhood operators are presented in [27].

    Definition 4. [27] Let U be the universe. A mapping N:U\rightarrow \mathcal{P}(U) is called a neighborhood operator. If x\in N(x) for all x\in U , N is a reflexive neighborhood operator. If x\in N(y)\Rightarrow y\in N(x) for all x, y\in U , N is a symmetric neighborhood operator. If [y\in N(x), z\in N(y)]\Rightarrow z\in N(x) for all x, y, z\in U , N is a transitive neighborhood operator. If the neighborhood operator N is reflexive, symmetric, and transitive, N is called a Pawlak neighborhood operator.

    Clearly, \{N(x)|x\in U\} of a Pawlak neighborhood operator N forms a partition of U .
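    The three properties are easy to test mechanically. The sketch below (a hypothetical operator, stored as a dict from each object to its neighborhood) checks them directly against Definition 4:

```python
# Minimal sketch: testing the properties of Definition 4 for a
# neighborhood operator N given as {x: N(x)} (hypothetical toy data).

def is_reflexive(N):
    return all(x in N[x] for x in N)

def is_symmetric(N):
    return all(x in N[y] for x in N for y in N[x])

def is_transitive(N):
    return all(z in N[x] for x in N for y in N[x] for z in N[y])

N = {"x1": {"x1", "x2"}, "x2": {"x2"}, "x3": {"x2", "x3"}}
print(is_reflexive(N), is_symmetric(N), is_transitive(N))
# True False True: x2 is in N(x1) but x1 is not in N(x2), so this N is
# reflexive and transitive but not symmetric (hence not Pawlak).
```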

    Definition 5. Let N be a reflexive neighborhood operator on U . Denote \{N(x)|x\in U\} by C_{N} . The ordered pair (U, N) is called a generalized neighborhood approximation space.

    Clearly, C_{N} from (U, N) is a covering of U . We introduce the generalized neighborhood multi-granulation rough sets now.

    Definition 6. Let N_{1}, N_{2}, \cdots, N_{m}\; (m\geq 2) be reflexive neighborhood operators on U with \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , and let N_{d}:U\rightarrow \mathcal{P}(U) be a Pawlak neighborhood operator; then (U, \mathcal{N}, N_{d}) is called a GNDIS. For X\subseteq U , define the generalized neighborhood pessimistic lower approximation \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) and pessimistic upper approximation \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) by

    \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) = \{x\in U|(N_{1}(x)\subseteq X)\wedge(N_{2}(x)\subseteq X)\wedge\cdots\wedge(N_{m}(x)\subseteq X)\},

    \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) = \sim\underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\sim X).

    (\underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)) is the generalized neighborhood pessimistic multi-granulation rough set of X .

    For \mathcal{H}\subseteq \mathcal{N} and x\in U , denote GN_{\mathcal{H}}(x) = \bigcup_{N_{k}\in \mathcal{H}}N_{k}(x) . Then, it is clear that \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) = \{x\in U|GN_{\mathcal{N}}(x)\subseteq X\} .

    Proposition 1. Let (U, \mathcal{N}, N_{d}) be a GNDIS, X, Y\subseteq U , \mathcal{H}\subseteq \mathcal{N} and \mathcal{H}\neq\emptyset , then:

    (1) \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\emptyset) = \emptyset , \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(U) = U , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\emptyset) = \emptyset , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(U) = U .

    (2) \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\subseteq X\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) .

    (3) X\subseteq Y\Rightarrow \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(Y) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(Y) .

    (4) \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X\cap Y) = \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\cap \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(Y) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X\cup Y) = \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\cup \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(Y) .

    (5) \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X))\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)) .

    (6) \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X)\subseteq \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(X)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(X) .

    Proof. It is verified by Definition 6.

    Definition 7. Given a GNDIS (U, \mathcal{N}, N_{d}) with \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , let X\subseteq U . Define the generalized neighborhood optimistic lower approximation \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) and optimistic upper approximation \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) of X as

    \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) = \{x\in U|(N_{1}(x)\subseteq X)\vee(N_{2}(x)\subseteq X)\vee\cdots\vee(N_{m}(x)\subseteq X)\},

    \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) = \sim\underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\sim X).

    (\underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)) is the generalized neighborhood optimistic multi-granulation rough set of X .

    Proposition 2. Let (U, \mathcal{N}, N_{d}) be a GNDIS, X, Y\subseteq U , \mathcal{H}\subseteq \mathcal{N} and \mathcal{H}\neq\emptyset , then:

    (1) \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\emptyset) = \emptyset , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(U) = U , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\emptyset) = \emptyset , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(U) = U .

    (2) \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\subseteq X\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) .

    (3) X\subseteq Y\Rightarrow \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(Y) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(Y) .

    (4) \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X\cap Y)\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\cap \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(Y) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\cup \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(Y)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X\cup Y) .

    (5) \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X))\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)) .

    (6) \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(X)\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(X)\subseteq \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(X) .

    Proof. It is easy to obtain the conclusion by Definition 7.
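    Definitions 6 and 7 translate directly into code. The following minimal Python sketch (hypothetical toy operators; the data of Example 1 below can be plugged in the same way) computes the four approximations, obtaining the upper approximations through the complement duality:

```python
# Minimal sketch of Definitions 6 and 7: pessimistic and optimistic
# multi-granulation approximations for reflexive neighborhood operators.

def pess_lower(ops, U, X):
    """{x : N_k(x) is contained in X for EVERY N_k}, i.e. GN_N(x) <= X."""
    return {x for x in U if all(N[x] <= X for N in ops)}

def opt_lower(ops, U, X):
    """{x : N_k(x) is contained in X for SOME N_k}."""
    return {x for x in U if any(N[x] <= X for N in ops)}

def pess_upper(ops, U, X):
    return U - pess_lower(ops, U, U - X)    # duality: ~ lower(~X)

def opt_upper(ops, U, X):
    return U - opt_lower(ops, U, U - X)

U = {1, 2, 3, 4}
N1 = {1: {1}, 2: {2, 3}, 3: {3}, 4: {3, 4}}     # hypothetical operators
N2 = {1: {1, 2}, 2: {2}, 3: {2, 3}, 4: {4}}
ops, X = [N1, N2], {1, 2, 3}
print(pess_lower(ops, U, X))    # {1, 2, 3}
print(opt_lower(ops, U, X))     # {1, 2, 3}
print(pess_upper(ops, U, X))    # {1, 2, 3, 4}: GN(4) = {3, 4} meets X
print(opt_upper(ops, U, X))     # {1, 2, 3}
```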

    In a GNDIS (U, \mathcal{N}, N_{d}) , C_{N_{d}} = \{N_{d}(y)|y\in U\} is a partition of U . In the following, every GNDIS mentioned satisfies |C_{N_{d}}|\geq 2 . In this subsection, we present pessimistic multi-granulation reduction of GNDISs.

    Definition 8. Given a GNDIS (U, \mathcal{N}, N_{d}) , let \mathcal{H}\subseteq \mathcal{N} .

    (1) If \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) for all y\in U , then we say that \mathcal{H} is a generalized neighborhood pessimistic lower consistent set (GNPL-consistent set). Denote the family of all GNPL-consistent sets by Cons_{L}^{P}(\mathcal{N}) . If \mathcal{H}\in Cons_{L}^{P}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called a GNPL-reduct. Denote the set of all GNPL-reducts by Red_{L}^{P}(\mathcal{N}) ; the core w.r.t. GNPL-reducts is defined as Core_{L}^{P}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{L}^{P}(\mathcal{N})\} .

    (2) If \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) for all y\in U , then we say that \mathcal{H} is a generalized neighborhood pessimistic upper consistent set (GNPU-consistent set). Denote the family of all GNPU-consistent sets by Cons_{U}^{P}(\mathcal{N}) . If \mathcal{H}\in Cons_{U}^{P}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called a GNPU-reduct. Denote the set of all GNPU-reducts as Red_{U}^{P}(\mathcal{N}) ; the core w.r.t. GNPU-reducts is defined by Core_{U}^{P}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{U}^{P}(\mathcal{N})\} .

    From Definition 8, we can see that a GNPL-reduct (or a GNPU-reduct) is a minimal subset of \mathcal{N} that preserves the pessimistic lower approximations (or the pessimistic upper approximations) of all sets in C_{N_{d}} . The pessimistic lower and upper reducts of a GNDIS are different in general, as illustrated by the following example.

    Example 1. (1) A GNDIS (U, \mathcal{N}, N_{d}) is presented in Table 1, where U = \{x_{1}, x_{2}, \cdots, x_{6}\} and \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{5}\} . The generalized neighborhood granules GN_{\mathcal{N}}(x) of x\in U are presented in Table 2.

    Table 1.  A GNDIS.
    \ast x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}
    N_{1}(x_{i}) \{x_{1}, x_{2}\} \{x_{2}, x_{3}, x_{4}\} \{x_{3}, x_{5}\} \{x_{1}, x_{4}\} \{x_{2}, x_{4}, x_{5}\} \{x_{2}, x_{6}\}
    N_{2}(x_{i}) \{x_{1}, x_{3}\} \{x_{2}, x_{5}\} \{x_{2}, x_{3}\} \{x_{1}, x_{3}, x_{4}\} \{x_{3}, x_{5}\} \{x_{1}, x_{6}\}
    N_{3}(x_{i}) \{x_{1}, x_{2}\} \{x_{2}, x_{4}\} \{x_{3}, x_{6}\} \{x_{1}, x_{2}, x_{4}\} \{x_{4}, x_{5}\} \{x_{3}, x_{6}\}
    N_{4}(x_{i}) \{x_{1}, x_{2}\} \{x_{2}, x_{3}\} \{x_{2}, x_{3}\} \{x_{1}, x_{2}, x_{4}\} \{x_{2}, x_{5}\} \{x_{1}, x_{6}\}
    N_{5}(x_{i}) \{x_{1}, x_{3}\} \{x_{2}, x_{4}\} \{x_{3}, x_{6}\} \{x_{1}, x_{4}\} \{x_{3}, x_{5}\} \{x_{3}, x_{6}\}
    N_{d}(x_{i}) \{x_{1}, x_{2}, x_{3}\} \{x_{1}, x_{2}, x_{3}\} \{x_{1}, x_{2}, x_{3}\} \{x_{4}, x_{5}\} \{x_{4}, x_{5}\} \{x_{6}\}
    Table 2.  The generalized neighborhood granules of x\in U w.r.t. \mathcal{N} in Example 1.
    \ast x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}
    GN_{\mathcal{N}}(x_{i}) \{x_{1}, x_{2}, x_{3}\} \{x_{2}, x_{3}, x_{4}, x_{5}\} \{x_{2}, x_{3}, x_{5}, x_{6}\} \{x_{1}, x_{2}, x_{3}, x_{4}\} \{x_{2}, x_{3}, x_{4}, x_{5}\} \{x_{1}, x_{2}, x_{3}, x_{6}\}

    By Definition 6, we get that \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}\}) = \emptyset , \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{6}\}) = \emptyset , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = U , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{4}, x_{5}\} , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{6}\}) = \{x_{3}, x_{6}\} .

    Let \mathcal{H} = \{N_{3}, N_{4}\} . We compute that \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNPL-consistent set. Let \mathcal{H}_{1} = \{N_{3}\} . We get that \underline{\sum\limits_{\mathcal{H}_{1}} N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{5}\}\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}\}) , which implies that \mathcal{H}_{1} is not a GNPL-consistent set. Let \mathcal{H}_{2} = \{N_{4}\} . We obtain that \underline{\sum\limits_{\mathcal{H}_{2}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}\}\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) , which implies that \mathcal{H}_{2} is not a GNPL-consistent set. So \mathcal{H} is a GNPL-reduct. Due to \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{2}, x_{4}, x_{5}\}\neq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}\}) , \mathcal{H} is not a GNPU-consistent set, and hence \mathcal{H} is not a GNPU-reduct. At the same time, we get a conclusion: A GNPL-reduct is not necessarily a GNPU-reduct.

    (2) The operator N_{d}:U\rightarrow \mathcal{P}(U) in (1) is changed to: N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) = \{x_{1}, x_{2}, x_{3}\} , N_{d}(x_{4}) = N_{d}(x_{5}) = N_{d}(x_{6}) = \{x_{4}, x_{5}, x_{6}\} . Then, we get another GNDIS (U, \mathcal{N}, N_{d}) . By Definition 6, we have that \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}, x_{6}\}) = \emptyset , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = U , \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{3}, x_{4}, x_{5}, x_{6}\} .

    Let \mathcal{H} = \{N_{2}\} . We obtain that \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(x_{i})) = \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNPU-reduct. It follows from \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{3}\}\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) that \mathcal{H} is not a GNPL-consistent set, and hence \mathcal{H} is not a GNPL-reduct. Hence, we obtain a conclusion: A GNPU-reduct is not necessarily a GNPL-reduct.

    A matrix is constructed to compute all the GNPL-reducts.

    Definition 9. Consider that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Letting x\in U , N_{d}(y)\in C_{N_{d}} , define

    GD_{L}^{P}(x, N_{d}(y)) = \left\{ \begin{array}{rcl} \{N_{k}\in \mathcal{N}|N_{k}(x)\not\subseteq N_{d}(y)\}, & & {GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y),} \\ \mathcal{N}, & & {else.} \end{array}\right.

    \mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNPL-discernibility matrix (GNPL-D matrix) of (U, \mathcal{N}, N_{d}) .

    Proposition 3. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPL-D matrix is \mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Then,

    (1) \forall x\in U , GD_{L}^{P}(x, N_{d}(x))\neq\emptyset .

    (2) \forall x\in U , \forall N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} .

    Proof. (1) \forall x\in U , if GN_{\mathcal{N}}(x)\not\subseteq N_{d}(x) , then there is an N_{k}\in \mathcal{N} satisfying N_{k}(x)\not\subseteq N_{d}(x) . Hence N_{k}\in GD_{L}^{P}(x, N_{d}(x)) , which implies that GD_{L}^{P}(x, N_{d}(x))\neq\emptyset . If GN_{\mathcal{N}}(x)\subseteq N_{d}(x) , then GD_{L}^{P}(x, N_{d}(x)) = \mathcal{N} by Definition 9.

    (2) For any x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , since x\in N_{k}(x) for every N_{k}\in \mathcal{N} , we get that GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) and N_{k}(x)\not\subseteq N_{d}(y) for all N_{k}\in \mathcal{N} . Thus, GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\not\subseteq N_{d}(y)\} = \mathcal{N} .

    By Proposition 3, \forall x\in U , \forall N_{d}(y)\in C_{N_{d}} , GD_{L}^{P}(x, N_{d}(y))\neq\emptyset . Utilizing the GNPL-D matrix, the GNPL-reducts can be characterized.

    Theorem 1. Suppose that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Letting \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} ,

    (1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} .

    (2) \mathcal{H}\in Red_{L}^{P}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{L}^{P}(x, N_{d}(y)) such that GD_{L}^{P}(x, N_{d}(y))\cap \mathcal{H}_{0} = \emptyset .

    (3) N_{k}\in Core_{L}^{P}(\mathcal{N}) \Leftrightarrow \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\} .

    Proof. (1) "\Rightarrow". \forall x\in U , N_{d}(y)\in C_{N_{d}} , if GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) , then x\not\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) . Since \mathcal{H} is a GNPL-consistent set, \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . Hence, x\not\in \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . It implies that GN_{\mathcal{H}}(x) = \bigcup_{N_{k}\in \mathcal{H}}N_{k}(x)\not\subseteq N_{d}(y) . Then, we can find an N_{k}\in \mathcal{H} such that N_{k}(x)\not\subseteq N_{d}(y) . Therefore, \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset . If GN_{\mathcal{N}}(x)\subseteq N_{d}(y) , then GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} . It is clear that \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset .

    "\Leftarrow". \forall y\in U , by Proposition 1(6), \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y))\subseteq \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . \forall x\not\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) , we get that GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) . Since \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset , let N_{k}\in \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y)) . Thus, according to Definition 9, N_{k}(x)\not\subseteq N_{d}(y) . It follows that GN_{\mathcal{H}}(x) = \bigcup_{N_{k}\in \mathcal{H}}N_{k}(x)\not\subseteq N_{d}(y) . Therefore, x\not\in \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) , which implies that \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y))\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) . We can get that \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) .

    (2) It is verified from (1).

    (3) "\Rightarrow". If not, every GD_{L}^{P}(x, N_{d}(y))\in \mathcal{GD}_{L}^{P} satisfying N_{k}\in GD_{L}^{P}(x, N_{d}(y)) has |GD_{L}^{P}(x, N_{d}(y))|\geq 2 . Let \mathcal{H} = \mathcal{N}\setminus\{N_{k}\} ; then \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for every entry, so \mathcal{H} is a GNPL-consistent set by (1). Thus, there exists a GNPL-reduct \mathcal{H}_{0}\subseteq \mathcal{H} with N_{k}\not\in \mathcal{H}_{0} , which contradicts the fact that N_{k}\in Core_{L}^{P}(\mathcal{N}) .

    "\Leftarrow". If not, we can find a reduct \mathcal{H} such that N_{k}\not\in \mathcal{H} . Since GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\} , we obtain that N_{k}(x)\not\subseteq N_{d}(y) and N_{l}(x)\subseteq N_{d}(y) for all l\neq k\; (l\in\{1, 2, \cdots, m\}) . Then \bigcup_{l\neq k}N_{l}(x)\subseteq N_{d}(y) . Since N_{k}\not\in \mathcal{H} , GN_{\mathcal{H}}(x)\subseteq \bigcup_{l\neq k}N_{l}(x)\subseteq N_{d}(y) , so x\in \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . However, GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) implies x\not\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) , which contradicts the fact that \mathcal{H} is a GNPL-consistent set.

    Definition 10. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPL-D matrix is \mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{L}^{P}) = \bigwedge\{\bigvee GD_{L}^{P}(x, N_{d}(y))|GD_{L}^{P}(x, N_{d}(y))\in \mathcal{GD}_{L}^{P}\} .

    \bigvee GD_{L}^{P}(x, N_{d}(y)) is the disjunction of all neighborhood operators in GD_{L}^{P}(x, N_{d}(y)) , and \bigwedge\{\bigvee GD_{L}^{P}(x, N_{d}(y))|GD_{L}^{P}(x, N_{d}(y))\in \mathcal{GD}_{L}^{P}\} is the conjunction of all \bigvee GD_{L}^{P}(x, N_{d}(y)) .

    Theorem 2. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq \mathcal{N} . \mathcal{H}\in Red_{L}^{P}(\mathcal{N}) \Leftrightarrow N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{L}^{P}) .

    Proof. It is trivial based on Definition 10.

    Remark 1. The \mathcal{GD}_{L}^{P} in Definition 9 can be simplified as (\mathcal{GD}_{L}^{P})^{\prime} = \{GD_{L}^{P}(x, N_{d}(x))|x\in U\} .

    In fact, due to Proposition 3, for any x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , GD_{L}^{P}(x, N_{d}(x))\subseteq GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} . Then, by Definition 10, f((\mathcal{GD}_{L}^{P})^{\prime}) = f(\mathcal{GD}_{L}^{P}) .

    We employ Example 2 below to explain the discernibility method for calculating all the GNPL-reducts of a GNDIS.

    Example 2. Continued from Example 1(1). By Definition 9, we obtain that

    \mathcal{GD}_{L}^{P} = \left ( \begin{array}{c|ccc} GD_{L}^{P}(x_{i}, N_{d}(x_{j})) & N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) & N_{d}(x_{4}) = N_{d}(x_{5}) & N_{d}(x_{6}) \\ \hline x_{1} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{2} & \{N_{1},N_{2},N_{3},N_{5}\} & \mathcal{N} & \mathcal{N} \\ x_{3} & \{N_{1},N_{3},N_{5}\} & \mathcal{N} & \mathcal{N} \\ x_{4} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{5} & \mathcal{N} & \{N_{1},N_{2},N_{4},N_{5}\} & \mathcal{N} \\ x_{6} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ \end{array} \right).

    From Remark 1, we have that

    (\mathcal{GD}_{L}^{P})^{\prime} = \left ( \begin{array}{c|c} x_{i} & GD_{L}^{P}(x_{i}, N_{d}(x_{i})) \\ \hline x_{1} & \mathcal{N} \\ x_{2} & \{N_{1},N_{2},N_{3},N_{5}\} \\ x_{3} & \{N_{1},N_{3},N_{5}\} \\ x_{4} & \mathcal{N} \\ x_{5} & \{N_{1},N_{2},N_{4},N_{5}\} \\ x_{6} & \mathcal{N} \\ \end{array} \right).

    By Theorem 1(3), Core_{L}^{P}(\mathcal{N}) = \emptyset . According to Definition 10, f(\mathcal{GD}_{L}^{P}) = f((\mathcal{GD}_{L}^{P})^{\prime}) = (N_{1}\vee N_{2}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{4}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = N_{1}\vee(N_{2}\wedge N_{3})\vee(N_{3}\wedge N_{4})\vee N_{5} . Then, \{N_{1}\} , \{N_{2}, N_{3}\} , \{N_{3}, N_{4}\} and \{N_{5}\} are the GNPL-reducts.

    By the analysis above, we present Algorithm 1 to calculate all the GNPL-reducts of a GNDIS. In Algorithm 1, the time complexity of Steps 1–11 is O(|U|^{2}|\mathcal{N}|) , and the time complexity of Steps 12–17 is O(\prod_{GD\in (\mathcal{GD}_{L}^{P})^{\prime}}|GD|) . The total time complexity of Algorithm 1 is O(|U|^{2}|\mathcal{N}|+\prod_{GD\in (\mathcal{GD}_{L}^{P})^{\prime}}|GD|) .

    Algorithm 1 A logic algorithm for calculating all the GNPL-reducts of a GNDIS
    Input: A GNDIS (U, \mathcal{N}, N_{d}) with U = \{x_{1}, x_{2}, \cdots, x_{n}\} and \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\}
    Output: All the GNPL-reducts Red_{L}^{P}(\mathcal{N})
    1: for i = 1:n do
    2:   Initialize GD_{L}^{P}(x_{i}, N_{d}(x_{i}))\leftarrow\emptyset ;
    3:   for k = 1:m do
    4:     if N_{k}(x_{i})\not\subseteq N_{d}(x_{i}) then
    5:       GD_{L}^{P}(x_{i}, N_{d}(x_{i}))\leftarrow GD_{L}^{P}(x_{i}, N_{d}(x_{i}))\cup\{N_{k}\}
    6:     end if
    7:   end for
    8:   if GD_{L}^{P}(x_{i}, N_{d}(x_{i})) = \emptyset then
    9:     GD_{L}^{P}(x_{i}, N_{d}(x_{i}))\leftarrow \mathcal{N}
    10:   end if
    11: end for
    12: Initialize Red_{L}^{P}(\mathcal{N}) ;
    13: for i = 1:n do
    14:   Red_{L}^{P}(\mathcal{N})\leftarrow Red_{L}^{P}(\mathcal{N})\wedge(\bigvee GD_{L}^{P}(x_{i}, N_{d}(x_{i})))
    15: end for
    16: Compute the minimal disjunctive normal form Red_{L}^{P}(\mathcal{N}) = \bigvee_{l = 1}^{t}(\bigwedge_{k = 1}^{s_{l}}N_{k}) ;
    17: Return Red_{L}^{P}(\mathcal{N}) .
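    The following Python sketch is one possible rendering of Algorithm 1 (the helper names are ours, not from the paper). Run on the GNDIS of Example 1(1), it reproduces the GNPL-reducts \{N_{1}\} , \{N_{2}, N_{3}\} , \{N_{3}, N_{4}\} , \{N_{5}\} obtained in Example 2:

```python
from itertools import product

# A runnable sketch of Algorithm 1 on the GNDIS of Example 1(1)
# (Table 1, objects written as integers 1..6).

N = {
    "N1": {1: {1, 2}, 2: {2, 3, 4}, 3: {3, 5}, 4: {1, 4}, 5: {2, 4, 5}, 6: {2, 6}},
    "N2": {1: {1, 3}, 2: {2, 5}, 3: {2, 3}, 4: {1, 3, 4}, 5: {3, 5}, 6: {1, 6}},
    "N3": {1: {1, 2}, 2: {2, 4}, 3: {3, 6}, 4: {1, 2, 4}, 5: {4, 5}, 6: {3, 6}},
    "N4": {1: {1, 2}, 2: {2, 3}, 3: {2, 3}, 4: {1, 2, 4}, 5: {2, 5}, 6: {1, 6}},
    "N5": {1: {1, 3}, 2: {2, 4}, 3: {3, 6}, 4: {1, 4}, 5: {3, 5}, 6: {3, 6}},
}
Nd = {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}, 4: {4, 5}, 5: {4, 5}, 6: {6}}
U = range(1, 7)

# Steps 1-11: the simplified GNPL-D matrix (GD_L^P)' of Remark 1.
clauses = set()
for x in U:
    gd = frozenset(k for k, Nk in N.items() if not Nk[x] <= Nd[x])
    clauses.add(gd if gd else frozenset(N))

# Steps 12-17: minimal disjunctive normal form by distribution + absorption.
def minimal_dnf(cnf):
    implicants = []
    for choice in product(*cnf):
        term = frozenset(choice)
        if any(t <= term for t in implicants):
            continue                      # absorbed by a smaller term
        implicants = [t for t in implicants if not term <= t] + [term]
    return implicants

print(sorted(sorted(t) for t in minimal_dnf(clauses)))
# [['N1'], ['N2', 'N3'], ['N3', 'N4'], ['N5']]
```

    The distribute-and-absorb step mirrors the product term in the complexity bound above; for large systems a dedicated prime-implicant procedure would be used instead.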

    We construct a discernibility matrix to get all the GNPU-reducts as follows:

    Definition 11. Suppose that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Letting x\in U and N_{d}(y)\in C_{N_{d}} , define

    GD_{U}^{P}(x, N_{d}(y)) = \left\{ \begin{array}{rcl} \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y)\neq\emptyset\}, & & {GN_{\mathcal{N}}(x)\cap N_{d}(y)\neq\emptyset,} \\ \emptyset, & & {else.} \end{array}\right.

    \mathcal{GD}_{U}^{P} = \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNPU-discernibility matrix (GNPU-D matrix) of (U, \mathcal{N}, N_{d}) .

    Proposition 4. \forall x\in U , GD_{U}^{P}(x, N_{d}(x)) = \mathcal{N} .

    Proof. \forall x\in U , since x\in N_{k}(x)\cap N_{d}(x) for every N_{k}\in \mathcal{N} , we have GN_{\mathcal{N}}(x)\cap N_{d}(x)\neq\emptyset ; then GD_{U}^{P}(x, N_{d}(x)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(x)\neq\emptyset\} = \mathcal{N} .

    Theorem 3. Consider that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Letting \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} ,

    (1) \mathcal{H}\in Cons_{U}^{P}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{P}(x, N_{d}(y))\neq\emptyset .

    (2) \mathcal{H}\in Red_{U}^{P}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{P}(x, N_{d}(y))\neq\emptyset , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{U}^{P}(x, N_{d}(y))\neq\emptyset such that GD_{U}^{P}(x, N_{d}(y))\cap \mathcal{H}_{0} = \emptyset .

    (3) N_{k}\in Core_{U}^{P}(\mathcal{N}) \Leftrightarrow \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{U}^{P}(x, N_{d}(y)) = \{N_{k}\} .

    Proof. (1) "\Rightarrow". \forall x\in U, N_{d}(y)\in C_{N_{d}} , if GD_{U}^{P}(x, N_{d}(y))\neq\emptyset , then GN_{\mathcal{N}}(x)\cap N_{d}(y)\neq\emptyset , which implies that x\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) . Since \mathcal{H} is a GNPU-consistent set, \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . It implies that x\in \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . Hence, there is an N_{k}\in \mathcal{H} satisfying N_{k}(x)\cap N_{d}(y)\neq\emptyset . By Definition 11, N_{k}\in GD_{U}^{P}(x, N_{d}(y)) . Thus, \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset .

    "\Leftarrow". \forall y\in U , by Proposition 1(6), \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y))\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) . \forall x\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y)) , GN_{\mathcal{N}}(x)\cap N_{d}(y)\neq\emptyset . It follows from \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset that there exists an N_{k}\in \mathcal{N} such that N_{k}\in \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y)) . Hence N_{k}(x)\cap N_{d}(y)\neq\emptyset , which implies that GN_{\mathcal{H}}(x)\cap N_{d}(y)\neq\emptyset . Then, x\in \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) . So \overline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(N_{d}(y))\subseteq \overline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(N_{d}(y)) .

    (2) It is easy to obtain (2) by (1).

    (3) Similar to the proof of (3) in Theorem 1.

    Definition 12. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPU-D matrix is \mathcal{GD}_{U}^{P} = \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{U}^{P}) = \bigwedge\{\bigvee GD_{U}^{P}(x, N_{d}(y))|GD_{U}^{P}(x, N_{d}(y))\in \mathcal{GD}_{U}^{P}, GD_{U}^{P}(x, N_{d}(y))\neq\emptyset\} .

    Theorem 4. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq \mathcal{N} . \mathcal{H}\in Red_{U}^{P}(\mathcal{N}) \Leftrightarrow N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{U}^{P}) .

    Proof. It is clear based on Definition 12.

    By Theorem 4, the set of all GNPU-reducts in a GNDIS and the set of all prime implicants of f(\mathcal{GD}_{U}^{P}) are in one-to-one correspondence. Example 3 is employed to illustrate the above theorems.

    Example 3. Continued from Example 1(1). By Definition 11, we get that

    \mathcal{GD}_{U}^{P} = \left ( \begin{array}{c|ccc} GD_{U}^{P}(x_{i}, N_{d}(x_{j})) & N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) & N_{d}(x_{4}) = N_{d}(x_{5}) & N_{d}(x_{6}) \\ \hline x_{1} & \mathcal{N} & \emptyset & \emptyset \\ x_{2} & \mathcal{N} & \{N_{1},N_{2},N_{3},N_{5}\} & \emptyset \\ x_{3} & \mathcal{N} & \{N_{1}\} & \{N_{3},N_{5}\} \\ x_{4} & \mathcal{N} & \mathcal{N} & \emptyset \\ x_{5} & \{N_{1},N_{2},N_{4},N_{5}\} & \mathcal{N} & \emptyset \\ x_{6} & \mathcal{N} & \emptyset & \mathcal{N} \\ \end{array} \right).

    According to Theorem 3, Core_{U}^{P}(\mathcal{N}) = \{N_{1}\} . By Definition 12, f(\mathcal{GD}_{U}^{P}) = N_{1}\wedge(N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{4}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = (N_{1}\wedge N_{3})\vee(N_{1}\wedge N_{5}) .

    Then, \{N_{1}, N_{3}\} and \{N_{1}, N_{5}\} are the GNPU-reducts.

    In this subsection, we discuss optimistic multi-granulation reduction of GNDISs.

    Definition 13. Let (U, \mathcal{N}, N_{d}) be a GNDIS.

    (1) \mathcal{H}\subseteq \mathcal{N} is a generalized neighborhood optimistic lower consistent set (GNOL-consistent set) if \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) for every y\in U . Denote the family of all GNOL-consistent sets by Cons_{L}^{O}(\mathcal{N}) . If \mathcal{H}\in Cons_{L}^{O}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is said to be a GNOL-reduct. Denote the set of all GNOL-reducts as Red_{L}^{O}(\mathcal{N}) ; the core w.r.t. GNOL-reducts is defined by Core_{L}^{O}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{L}^{O}(\mathcal{N})\} .

    (2) \mathcal{H}\subseteq \mathcal{N} is a generalized neighborhood optimistic upper consistent set (GNOU-consistent set) if \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) for all y\in U . Denote the family of all GNOU-consistent sets by Cons_{U}^{O}(\mathcal{N}) . If \mathcal{H}\in Cons_{U}^{O}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is said to be a GNOU-reduct. Denote the set of all GNOU-reducts by Red_{U}^{O}(\mathcal{N}) ; the core w.r.t. GNOU-reducts is defined as Core_{U}^{O}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{U}^{O}(\mathcal{N})\} .

    By Definition 13, a GNOL-reduct (or GNOU-reduct) is a minimal subset of \mathcal{N} that maintains the optimistic lower approximations (or optimistic upper approximations) of all N_{d}(y)\in C_{N_{d}} . The GNOL-reduct and GNOU-reduct are different, as illustrated by the next example.

    Example 4. Continued from Example 1(1). Change the Pawlak neighborhood operator N_{d}:U\rightarrow \mathcal{P}(U) in Example 1(1) to: N_{d}(x_{1}) = N_{d}(x_{3}) = \{x_{1}, x_{3}\} , N_{d}(x_{2}) = N_{d}(x_{4}) = N_{d}(x_{5}) = N_{d}(x_{6}) = \{x_{2}, x_{4}, x_{5}, x_{6}\} . Then, we get a new GNDIS (U, \mathcal{N}, N_{d}) . According to Definition 7, \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{5}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}, x_{3}, x_{4}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{4}, x_{5}, x_{6}\} .

    Let \mathcal{H} = \{N_{1}, N_{5}\} . We compute that \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNOL-consistent set. Let \mathcal{H}_{1} = \{N_{1}\} . We get that \underline{\sum\limits_{\mathcal{H}_{1}} N_{k}^{O}}(\{x_{1}, x_{3}\}) = \emptyset\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{3}\}) , which implies that \mathcal{H}_{1} is not a GNOL-consistent set. Let \mathcal{H}_{2} = \{N_{5}\} . We obtain that \underline{\sum\limits_{\mathcal{H}_{2}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}\}\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) , which implies that \mathcal{H}_{2} is not a GNOL-consistent set. So, \mathcal{H} is a GNOL-reduct. Due to \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(\{x_{1}, x_{3}\})\neq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{3}\}) , \mathcal{H} is not a GNOU-consistent set, and hence \mathcal{H} is not a GNOU-reduct.

    Let \mathcal{H} = \{N_{2}, N_{3}\} . We obtain that \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(x_{i})) = \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNOU-consistent set. Let \mathcal{H}_{1} = \{N_{2}\} . Then, \overline{\sum\limits_{\mathcal{H}_{1}} N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}, x_{3}, x_{4}, x_{5}, x_{6}\}\neq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{3}\}) , which implies that \mathcal{H}_{1} is not a GNOU-consistent set. Let \mathcal{H}_{2} = \{N_{3}\} . Then, \overline{\sum\limits_{\mathcal{H}_{2}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = U\neq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) , which implies that \mathcal{H}_{2} is not a GNOU-consistent set. Hence, \mathcal{H} is a GNOU-reduct. It follows from \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{5}\}\neq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) that \mathcal{H} is not a GNOL-consistent set, and hence \mathcal{H} is not a GNOL-reduct.

    Hence, we get that a GNOU-reduct is not necessarily a GNOL-reduct, and a GNOL-reduct is not necessarily a GNOU-reduct. In the following, we calculate GNOL-reducts and GNOU-reducts of a GNDIS by the discernibility technique.

    Definition 14. Consider a GNDIS (U, \mathcal{N}, N_{d}) , where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . \forall x\in U, N_{d}(y)\in C_{N_{d}} , define

    GD_{L}^{O}(x, N_{d}(y)) = \left\{ \begin{array}{rcl} \{N_{k}\in \mathcal{N}|N_{k}(x)\subseteq N_{d}(y)\}, & & {x\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)),} \\ \mathcal{N}, & & {else.} \end{array}\right.

    \mathcal{GD}_{L}^{O} = \{GD_{L}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNOL-discernibility matrix (GNOL-D matrix) of (U, \mathcal{N}, N_{d}) .

    Theorem 5. In a GNDIS (U, \mathcal{N}, N_{d}) with \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , let \mathcal{H}\subseteq \mathcal{N} , N_{k}\in \mathcal{N} , then

    (1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset for each GD_{L}^{O}(x, N_{d}(y))\in \mathcal{GD}_{L}^{O} .

    (2) \mathcal{H}\in Red_{L}^{O}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{L}^{O}(x, N_{d}(y))\in \mathcal{GD}_{L}^{O} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists some GD_{L}^{O}(x, N_{d}(y))\in \mathcal{GD}_{L}^{O} such that GD_{L}^{O}(x, N_{d}(y))\cap \mathcal{H}_{0} = \emptyset .

    (3) N_{k}\in Core_{L}^{O}(\mathcal{N}) \Leftrightarrow \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{L}^{O}(x, N_{d}(y)) = \{N_{k}\} .

    Proof. (1) "\Rightarrow". \forall x\in U, N_{d}(y)\in C_{N_{d}} , if x\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) , according to Definition 7, we can find an N_{k}\in \mathcal{H} such that N_{k}(x)\subseteq N_{d}(y) . Thus, N_{k}\in GD_{L}^{O}(x, N_{d}(y)) . It follows that \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset . If x\not\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) , then GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} . It is clear that \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset .

    "\Leftarrow". \forall y\in U , by Proposition 2(6), \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y))\subseteq \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) . \forall x\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) , we obtain that GD_{L}^{O}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\subseteq N_{d}(y)\} . Since \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset , let N_{k}\in \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y)) ; then N_{k}(x)\subseteq N_{d}(y) . Hence, x\in \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) . It implies that \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y))\subseteq \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) .

    (2) It is verified by (1).

    (3) With reference to the proof of (3) in Theorem 1.

    Proposition 5. \forall x\in U, N_{d}(y)\in C_{N_{d}} , if x\not\in N_{d}(y) , then GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} .

    Proof. \forall N_{d}(y)\in C_{N_{d}} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y))\subseteq N_{d}(y) . If x\not\in N_{d}(y) , then x\not\in \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) . Hence, according to Definition 14, GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} .

    Remark 2. The \mathcal{GD}_{L}^{O} in Definition 14 can be simplified as (\mathcal{GD}_{L}^{O})^{\prime} = \{GD_{L}^{O}(x, N_{d}(x))|x\in U\} .

    In fact, by Proposition 5, for every x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} . Then, the \mathcal{GD}_{L}^{O} in Theorem 5 can be replaced by (\mathcal{GD}_{L}^{O})^{\prime} .

    Definition 15. Assume that (U, \mathcal{N}, N_{d}) is a GNDIS, whose simplified GNOL-D matrix is (\mathcal{GD}_{L}^{O})^{\prime} = \{GD_{L}^{O}(x, N_{d}(x))|x\in U\} . Define f((\mathcal{GD}_{L}^{O})^{\prime}) = \bigwedge\{\bigvee GD_{L}^{O}(x, N_{d}(x))|x\in U\} .

    Theorem 6. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq \mathcal{N} . \mathcal{H}\in Red_{L}^{O}(\mathcal{N}) \Leftrightarrow N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f((\mathcal{GD}_{L}^{O})^{\prime}) .

    Proof. It can be obtained by Definition 15.

    By means of Theorem 6, all GNOL-reducts of a GNDIS can be obtained from f((\mathcal{GD}_{L}^{O})^{\prime}) . An algorithm for calculating all the GNOL-reducts of a GNDIS is presented as Algorithm 2. The total time complexity of Algorithm 2 is O(|U|^{2}|\mathcal{N}|+\prod_{GD\in (\mathcal{GD}_{L}^{O})^{\prime}}|GD|) . Example 5 is employed to illustrate the calculation process.

    Algorithm 2 A logic algorithm for computing all the GNOL-reducts of a GNDIS
    Input: A GNDIS (U, \mathcal{N}, N_{d}) with U = \{x_{1}, x_{2}, \cdots, x_{n}\} and \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\}
    Output: All the GNOL-reducts Red_{L}^{O}(\mathcal{N})
    1: for i = 1:n do
    2:   Initialize GD_{L}^{O}(x_{i}, N_{d}(x_{i}))\leftarrow\emptyset ;
    3:   for k = 1:m do
    4:     if N_{k}(x_{i})\subseteq N_{d}(x_{i}) then
    5:       GD_{L}^{O}(x_{i}, N_{d}(x_{i}))\leftarrow GD_{L}^{O}(x_{i}, N_{d}(x_{i}))\cup\{N_{k}\}
    6:     end if
    7:   end for
    8:   if GD_{L}^{O}(x_{i}, N_{d}(x_{i})) = \emptyset then
    9:     GD_{L}^{O}(x_{i}, N_{d}(x_{i}))\leftarrow \mathcal{N}
    10:   end if
    11: end for
    12: Initialize Red_{L}^{O}(\mathcal{N}) ;
    13: for i = 1:n do
    14:   Red_{L}^{O}(\mathcal{N})\leftarrow Red_{L}^{O}(\mathcal{N})\wedge(\bigvee GD_{L}^{O}(x_{i}, N_{d}(x_{i})))
    15: end for
    16: Compute the minimal disjunctive normal form Red_{L}^{O}(\mathcal{N}) = \bigvee_{l = 1}^{t}(\bigwedge_{k = 1}^{s_{l}}N_{k}) ;
    17: Return Red_{L}^{O}(\mathcal{N}) .

    Example 5. Continued from Example 1(1). We have that \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{4}, x_{5}\}) = \{x_{5}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{6}\}) = \emptyset . According to Definition 14, we get that

    \mathcal{GD}_{L}^{O} = \left ( \begin{array}{c|ccc} GD_{L}^{O}(x_{i}, N_{d}(x_{j})) & N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) & N_{d}(x_{4}) = N_{d}(x_{5}) & N_{d}(x_{6}) \\ \hline x_{1} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{2} & \{N_{4}\} & \mathcal{N} & \mathcal{N} \\ x_{3} & \{N_{2},N_{4}\} & \mathcal{N} & \mathcal{N} \\ x_{4} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{5} & \mathcal{N} & \{N_{3}\} & \mathcal{N} \\ x_{6} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ \end{array} \right),

    and

    (\mathcal{GD}_{L}^{O})^{\prime} = \left ( \begin{array}{c|c} x_{i} & GD_{L}^{O}(x_{i}, N_{d}(x_{i})) \\ \hline x_{1} & \mathcal{N} \\ x_{2} & \{N_{4}\} \\ x_{3} & \{N_{2},N_{4}\} \\ x_{4} & \mathcal{N} \\ x_{5} & \{N_{3}\} \\ x_{6} & \mathcal{N} \\ \end{array} \right).

    By Definition 15, f((\mathcal{GD}_{L}^{O})^{\prime}) = (N_{2}\vee N_{4})\wedge N_{4}\wedge N_{3}\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = N_{3}\wedge N_{4} .

    Hence, \{N_{3}, N_{4}\} is the unique GNOL-reduct. We also get that Core_{L}^{O}(\mathcal{N}) = \{N_{3}, N_{4}\} .

    Now, we construct a discernibility matrix to calculate GNOU-reducts.

    Definition 16. Let (U, \mathcal{N}, N_{d}) be a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . \forall x\in U, N_{d}(y)\in C_{N_{d}} , define

    GD_{U}^{O}(x, N_{d}(y)) = \left\{ \begin{array}{rcl} \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y) = \emptyset\}, & & {x\not\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)),} \\ \mathcal{N}, & & {else.} \end{array}\right.

    \mathcal{GD}_{U}^{O} = \{GD_{U}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNOU-discernibility matrix (GNOU-D matrix) of (U, \mathcal{N}, N_{d}) .

    It is easy to get that GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} .

    Theorem 7. Given a GNDIS (U, \mathcal{N}, N_{d}) , let \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} , then

    (1) \mathcal{H}\in Cons_{U}^{O}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{O}(x, N_{d}(y))\in \mathcal{GD}_{U}^{O} .

    (2) \mathcal{H}\in Red_{U}^{O}(\mathcal{N}) \Leftrightarrow \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{O}(x, N_{d}(y))\in \mathcal{GD}_{U}^{O} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{U}^{O}(x, N_{d}(y))\in \mathcal{GD}_{U}^{O} such that GD_{U}^{O}(x, N_{d}(y))\cap \mathcal{H}_{0} = \emptyset .

    (3) N_{k}\in Core_{U}^{O}(\mathcal{N}) \Leftrightarrow \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{U}^{O}(x, N_{d}(y)) = \{N_{k}\} .

    Proof. (1) "\Rightarrow". \forall x\in U , N_{d}(y)\in C_{N_{d}} , if x\not\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) , we get that x\not\in \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) . Then, there exists an N_{k}\in \mathcal{H} with N_{k}(x)\cap N_{d}(y) = \emptyset . It implies that N_{k}\in GD_{U}^{O}(x, N_{d}(y)) . Therefore, \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset . If x\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) , then GD_{U}^{O}(x, N_{d}(y)) = \mathcal{N} . It is clear that \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset .

    "\Leftarrow". \forall y\in U , by Proposition 2(6), \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y))\subseteq \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) . For any x\not\in \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) , GD_{U}^{O}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y) = \emptyset\} . Due to \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset , let N_{k}\in \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y)) . Then, N_{k}(x)\cap N_{d}(y) = \emptyset . Thus, x\not\in \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) . It follows that \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y))\subseteq \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) . We conclude that \overline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(y)) .

    (2) It is easy to obtain (2) by (1).

    (3) By reference to the proof of (3) in Theorem 1.

    Definition 17. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNOU-D matrix is \mathcal{GD}_{U}^{O} = \{GD_{U}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{U}^{O}) = \bigwedge\{\bigvee GD_{U}^{O}(x, N_{d}(y))|GD_{U}^{O}(x, N_{d}(y))\in \mathcal{GD}_{U}^{O}\} .

    Theorem 8. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq \mathcal{N} . \mathcal{H}\in Red_{U}^{O}(\mathcal{N}) \Leftrightarrow N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{U}^{O}) .

    Proof. It is trivial based on Definition 17.

    By Theorem 8, all the GNOU-reducts can be obtained by the conjunctive and disjunctive operations on \mathcal{GD}_{U}^{O} .

    Example 6. Continued from Example 2. We have that \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}, x_{4}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{4}, x_{5}\}) = \{x_{4}, x_{5}\} , \overline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} . By Definition 16, we deduce that

    \mathcal{GD}_{U}^{O} = \left ( \begin{array}{c|ccc} GD_{U}^{O}(x_{i}, N_{d}(x_{j})) & N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) & N_{d}(x_{4}) = N_{d}(x_{5}) & N_{d}(x_{6}) \\ \hline x_{1} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{2} & \mathcal{N} & \{N_{4}\} & \mathcal{N} \\ x_{3} & \mathcal{N} & \{N_{2},N_{3},N_{4},N_{5}\} & \{N_{1},N_{2},N_{4}\} \\ x_{4} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ x_{5} & \{N_{3}\} & \mathcal{N} & \mathcal{N} \\ x_{6} & \mathcal{N} & \mathcal{N} & \mathcal{N} \\ \end{array} \right).

    Then, f(\mathcal{GD}_{U}^{O}) = (N_{2}\vee N_{3}\vee N_{4}\vee N_{5})\wedge N_{4}\wedge(N_{1}\vee N_{2}\vee N_{4})\wedge N_{3}\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = N_{3}\wedge N_{4} .

    It follows from Theorem 8 that \{N_{3}, N_{4}\} is the only GNOU-reduct.
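    All four discernibility matrices share one pattern: an entry collects the operators satisfying a pointwise test, with a fixed fallback when the test collects nothing. The sketch below (our own generic rendering, repeating the Table 1 data so the snippet stays self-contained) instantiates the rules of Definitions 11, 14, and 16 and reproduces the reducts of Examples 3, 5, and 6:

```python
from itertools import product

# Generic sketch: GNPU-, GNOL-, and GNOU-reducts of Example 1(1) via the
# matrix rules of Definitions 11, 14, 16 (the rule table is our rendering).

N = {
    "N1": {1: {1, 2}, 2: {2, 3, 4}, 3: {3, 5}, 4: {1, 4}, 5: {2, 4, 5}, 6: {2, 6}},
    "N2": {1: {1, 3}, 2: {2, 5}, 3: {2, 3}, 4: {1, 3, 4}, 5: {3, 5}, 6: {1, 6}},
    "N3": {1: {1, 2}, 2: {2, 4}, 3: {3, 6}, 4: {1, 2, 4}, 5: {4, 5}, 6: {3, 6}},
    "N4": {1: {1, 2}, 2: {2, 3}, 3: {2, 3}, 4: {1, 2, 4}, 5: {2, 5}, 6: {1, 6}},
    "N5": {1: {1, 3}, 2: {2, 4}, 3: {3, 6}, 4: {1, 4}, 5: {3, 5}, 6: {3, 6}},
}
classes = [frozenset({1, 2, 3}), frozenset({4, 5}), frozenset({6})]
U = range(1, 7)

RULES = {  # name: (test on (N_k(x), decision class D), fallback)
    "GNPU": (lambda nk, D: bool(nk & D), "drop"),   # Definition 11
    "GNOL": (lambda nk, D: nk <= D, "all"),         # Definition 14
    "GNOU": (lambda nk, D: not nk & D, "all"),      # Definition 16
}

def minimal_dnf(cnf):
    implicants = []
    for choice in product(*cnf):
        term = frozenset(choice)
        if any(t <= term for t in implicants):
            continue
        implicants = [t for t in implicants if not term <= t] + [term]
    return implicants

for name, (test, fallback) in RULES.items():
    cnf = set()
    for x in U:
        for D in classes:
            entry = frozenset(k for k, Nk in N.items() if test(Nk[x], D))
            if entry:
                cnf.add(entry)
            elif fallback == "all":
                cnf.add(frozenset(N))    # else-branch: the whole family N
            # "drop": empty GNPU entries are omitted from f (Definition 12)
    print(name, sorted(sorted(t) for t in minimal_dnf(cnf)))
# GNPU [['N1', 'N3'], ['N1', 'N5']]
# GNOL [['N3', 'N4']]
# GNOU [['N3', 'N4']]
```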

    Remark 3. (1) There is no necessary association between the GNPL-reduct and GNOL-reduct.

    From Examples 2 and 5, \{N_{5}\} is a GNPL-reduct. However, \{N_{5}\} is not a GNOL-reduct.

    Continued from Example 1(1). A Pawlak neighborhood operator N_{d}:U\rightarrow \mathcal{P}(U) is defined by: N_{d}(x_{1}) = N_{d}(x_{6}) = \{x_{1}, x_{6}\} , N_{d}(x_{2}) = N_{d}(x_{3}) = N_{d}(x_{4}) = N_{d}(x_{5}) = \{x_{2}, x_{3}, x_{4}, x_{5}\} . Then, we get a GNDIS (U, \mathcal{N}, N_{d}) . By Definition 7, \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{1}, x_{6}\}) = \{x_{6}\} , \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{5}\} . Let \mathcal{H} = \{N_{2}\} . Then \underline{\sum\limits_{\mathcal{N}} N_{k}^{O}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{H}} N_{k}^{O}}(N_{d}(x_{i}))\; (i = 1, 2, \cdots, 6) , which implies that \mathcal{H} is a GNOL-reduct. Since \underline{\sum\limits_{\mathcal{N}} N_{k}^{P}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{5}\} and \underline{\sum\limits_{\mathcal{H}} N_{k}^{P}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{5}\} , \mathcal{H} is not a GNPL-reduct.

    (2) There is no necessary association between the GNPU-reduct and GNOU-reduct.

    By Examples 3 and 6, \{N_{1}, N_{3}\} is a GNPU-reduct but not a GNOU-reduct, and \{N_{3}, N_{4}\} is a GNOU-reduct but not a GNPU-reduct.

    In a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and U/R_{d} = \{[y]_{d}|y\in U\} , define a mapping N_{k}:U\rightarrow \mathcal{P}(U) by N_{k}(x) = [x]_{A_{k}} for all x\in U\; (k = 1, 2, \cdots, m) and a mapping N_{d}:U\rightarrow \mathcal{P}(U) by N_{d}(y) = [y]_{d} for all y\in U . Thus, each N_{k}\; (k = 1, 2, \cdots, m) is a reflexive neighborhood operator on U and N_{d} is a Pawlak neighborhood operator, and we get a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \cdots, N_{m}\} , which is the GNDIS induced by the DMS (U, \mathcal{A}, d) . Define a mapping I:\mathcal{A}\rightarrow \mathcal{N}^{C} by I(A_{k}) = N_{k} for all A_{k}\in \mathcal{A} . From the definition of \mathcal{N}^{C} , it is easy to get that I is a bijection.
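    For illustration, the induced operators can be built mechanically. The following sketch (hypothetical data, with the dictionary key playing the role of the bijection I ) realizes N_{k}(x) = [x]_{A_{k}} and N_{d}(y) = [y]_{d} :

```python
# Minimal sketch: inducing a GNDIS (U, N^C, N_d) from a DMS by
# N_k(x) = [x]_{A_k} and N_d(y) = [y]_d (hypothetical toy data).

def eq_class(U, value, attrs, x):
    return frozenset(y for y in U
                     if all(value[y][a] == value[x][a] for a in attrs))

U = ["x1", "x2", "x3"]
value = {"x1": {"a1": 1, "a2": 0, "d": 1},
         "x2": {"a1": 1, "a2": 1, "d": 1},
         "x3": {"a1": 0, "a2": 1, "d": 2}}
A = {"A1": ["a1"], "A2": ["a2"]}        # the family of attribute subsets
NC = {Ak: {x: eq_class(U, value, attrs, x) for x in U}   # I(A_k) = N_k
      for Ak, attrs in A.items()}
Nd = {x: eq_class(U, value, ["d"], x) for x in U}        # N_d(x) = [x]_d
print(sorted(NC["A1"]["x1"]))   # ['x1', 'x2']: N_1(x1) = [x1]_{A_1}
print(sorted(Nd["x3"]))         # ['x3']: N_d(x3) = [x3]_d
```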

    Proposition 6. Suppose that (U, \mathcal{A}, d) is a DMS and \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Then, for each X\subseteq U ,

    \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \underline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{P}}(X) ,

    \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \underline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(X) .

    Proof. Since N_{k}(x) = [x]_{A_{k}} for all x\in U\; (k = 1, 2, \cdots, m) , this follows directly from Definitions 1, 2, 6, and 7.

    From Proposition 6, the generalized neighborhood pessimistic rough set model in Definition 6 is a generalization of the pessimistic multi-granulation rough set model defined in [14], and the generalized neighborhood optimistic approximations in Definition 7 are an expansion of the optimistic multi-granulation approximations proposed in [18].

    Pessimistic multi-granulation reduction of DMSs is explored in [14,18,21].

    Definition 18. [14,18,21] Assume that (U, \mathcal{A}, d) is a DMS, where \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . Let \mathcal{H}\subseteq \mathcal{A} and \mathcal{H}\neq\emptyset .

    (1) \mathcal{H} is called a complete pessimistic lower consistent set (CPL-consistent set) if \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all CPL-consistent sets by Cons_{L}^{P}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a CPL-reduct of (U, \mathcal{A}, d) . Denote the family of all CPL-reducts of (U, \mathcal{A}, d) by Red_{L}^{P}(\mathcal{A}) , and Core_{L}^{P}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{L}^{P}(\mathcal{A})}\mathcal{H} is said to be a CPL-core.

    (2) \mathcal{H} is called a complete pessimistic upper consistent set (CPU-consistent set) if \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all CPU-consistent sets by Cons_{U}^{P}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a CPU-reduct of (U, \mathcal{A}, d) . Denote the family of all CPU-reducts of (U, \mathcal{A}, d) by Red_{U}^{P}(\mathcal{A}) , and Core_{U}^{P}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{U}^{P}(\mathcal{A})}\mathcal{H} is said to be a CPU-core.

    The relationships between the pessimistic multi-granulation reduction of (U, \mathcal{A}, d) and the pessimistic multi-granulation reduction of the GNDIS (U, \mathcal{N}^{C}, N_{d}) induced by (U, \mathcal{A}, d) are presented as follows:

    Theorem 9. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A} , A_{k}\in \mathcal{A} ,

    (1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Cons_{L}^{P}(\mathcal{N}^{C}) .

    (2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Red_{L}^{P}(\mathcal{N}^{C}) .

    (3) A_{k}\in Core_{L}^{P}(\mathcal{A}) \Leftrightarrow I(A_{k})\in Core_{L}^{P}(\mathcal{N}^{C}) .

    (4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Cons_{U}^{P}(\mathcal{N}^{C}) .

    (5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Red_{U}^{P}(\mathcal{N}^{C}) .

    (6) A_{k}\in Core_{U}^{P}(\mathcal{A}) \Leftrightarrow I(A_{k})\in Core_{U}^{P}(\mathcal{N}^{C}) .

    Proof. (1) Due to Definitions 8 and 18, and Proposition 6,

    \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \Leftrightarrow \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U

    \Leftrightarrow \underline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{P}}(N_{d}(y)) for all y\in U

    \Leftrightarrow I(\mathcal{H})\in Cons_{L}^{P}(\mathcal{N}^{C}) .

    (2) and (3). According to (1) and Definitions 8 and 18, the conclusions are obtained.

    (4)–(6). Similar to the proof of (1)–(3), the conclusions can be obtained by Definitions 8 and 18, and Proposition 6.

    To characterize the knowledge reduction of DMSs, discernibility matrices were designed by Tan et al. [21]. For any \mathcal{H}\subseteq \mathcal{A} , define the decision function by f_{\mathcal{H}}(x_{i}) = \{d(x_{j})|x_{j}\in N_{\mathcal{H}}(x_{i})\} . For each x\in U , define

    P(x) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}||f_{\{A_{k}\}}(x)|>1\}, & & {|f_{\mathcal{A}}(x)|>1,} \\ \mathcal{A}, & & {|f_{\mathcal{A}}(x)| = 1.} \end{array}\right.

    \mathcal{P} = \{P(x)|x\in U\} is called a CPL-discernibility matrix. For any (x, y)\in U\times U , define

    Q(x, y) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}|d(x)\in f_{\{A_{k}\}}(y)\}, & & {|f_{\mathcal{A}}(y)|>1,} \\ \mathcal{A}, & & {|f_{\mathcal{A}}(y)| = 1.} \end{array}\right.

    \mathcal{Q} = \{Q(x, y)|(x, y)\in U\times U\} is called a CPU-discernibility matrix.

    Due to Definitions 9 and 11, and Theorems 1 and 3, we obtain

    Corollary 1. [21] For any \mathcal{H}\subseteq \mathcal{A} , A_{k}\in \mathcal{A} ,

    (1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \Leftrightarrow \mathcal{H}\cap P(x)\neq\emptyset for all x\in U .

    (2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}) \Leftrightarrow \mathcal{H}\cap P(x)\neq\emptyset for all x\in U , and for every \mathcal{H}_{0}\subset\mathcal{H} , there exists an x\in U such that P(x)\cap \mathcal{H}_{0} = \emptyset .

    (3) A_{k}\in Core_{L}^{P}(\mathcal{A}) \Leftrightarrow \exists x\in U , P(x) = \{A_{k}\} .

    (4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \Leftrightarrow \mathcal{H}\cap Q(x, y)\neq\emptyset for all Q(x, y)\neq\emptyset .

    (5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}) \Leftrightarrow \mathcal{H}\cap Q(x, y)\neq\emptyset for all Q(x, y)\neq\emptyset , and for every \mathcal{H}_{0}\subset\mathcal{H} , there exists a Q(x, y)\in \mathcal{Q} such that Q(x, y)\cap \mathcal{H}_{0} = \emptyset .

    (6) A_{k}\in Core_{U}^{P}(\mathcal{A}) \Leftrightarrow there exists some (x, y)\in U\times U such that Q(x, y) = \{A_{k}\} .

    Proof. (1)–(3). According to Theorem 9, \mathcal{H} is a CPL-consistent set (CPL-reduct) of (U, \mathcal{A}, d) \Leftrightarrow I(\mathcal{H}) is a GNPL-consistent set (GNPL-reduct) of (U, \mathcal{N}^{C}, N_{d}) . By Property 4 in [21], for any x\in U and \mathcal{H}\subseteq \mathcal{A} , |f_{\mathcal{H}}(x)|>1 if and only if N_{\mathcal{H}}(x)\not\subseteq [x]_{d} . Then, by Definition 9, I(P(x)) = GD_{L}^{P}(x, N_{d}(x)) for all x\in U .

    According to Theorem 1, \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Cons_{L}^{P}(\mathcal{N}^{C}) \Leftrightarrow I(\mathcal{H})\cap GD_{L}^{P}(x, N_{d}(x))\neq\emptyset for each x\in U \Leftrightarrow \mathcal{H}\cap P(x)\neq\emptyset for each x\in U . Then, we get (1). We can obtain (2) and (3) similarly.

    (4)–(6). By Property 4 in [21], for any x\in U , |f_{\mathcal{A}}(x)|>1 \Leftrightarrow N_{\mathcal{A}}(x)\not\subseteq [x]_{d} \Leftrightarrow GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) .

    For any x, y\in U , d(y)\in f_{\{A_{k}\}}(x) = \{d(z)|z\in [x]_{A_{k}}\}

    \Leftrightarrow \exists z\in [x]_{A_{k}}, d(y) = d(z) \Leftrightarrow \exists z\in [x]_{A_{k}}, [y]_{d} = [z]_{d} \Leftrightarrow \exists z\in U, z\in [x]_{A_{k}}\cap [y]_{d} \Leftrightarrow [x]_{A_{k}}\cap [y]_{d}\neq\emptyset ,

    then \{A_{k}\in \mathcal{A}|d(y)\in f_{\{A_{k}\}}(x)\} = \{A_{k}\in \mathcal{A}|[x]_{A_{k}}\cap [y]_{d}\neq\emptyset\} . Hence,

    Q(y, x) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}|[x]_{A_{k}}\cap [y]_{d}\neq\emptyset\}, & & {N_{\mathcal{A}}(x)\not\subseteq [x]_{d},} \\ \mathcal{A}, & & {N_{\mathcal{A}}(x)\subseteq [x]_{d}.} \end{array}\right.

    For any x, y\in U , there are four cases. (a) GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y)\neq\emptyset , that is, N_{\mathcal{A}}(x)\not\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d}\neq\emptyset . By Definition 11, I(Q(y, x)) = GD_{U}^{P}(x, N_{d}(y)) . (b) GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y) = \emptyset , namely, N_{\mathcal{A}}(x)\not\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d} = \emptyset . From Definition 11, I(Q(y, x)) = \emptyset = GD_{U}^{P}(x, N_{d}(y)) . (c) GN_{\mathcal{N}^{C}}(x)\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y)\neq\emptyset , that is, N_{\mathcal{A}}(x)\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d}\neq\emptyset . Then N_{d}(x)\cap N_{d}(y)\neq\emptyset , which follows that N_{d}(x) = N_{d}(y) . According to Definition 11, I(Q(y, x)) = I(\mathcal{A}) = \mathcal{N}^{C} = GD_{U}^{P}(x, N_{d}(x)) = GD_{U}^{P}(x, N_{d}(y)) . (d) GN_{\mathcal{N}^{C}}(x)\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y) = \emptyset , i.e., N_{\mathcal{A}}(x)\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d} = \emptyset . By Definition 11, I(Q(y, x)) = I(\mathcal{A}) = \mathcal{N}^{C} and GD_{U}^{P}(x, N_{d}(y)) = \emptyset . In conclusion, \{I(Q(y, x))|x, y\in U\} and \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} coincide up to empty and total entries, which do not affect the induced discernibility function.

    Due to Theorem 3, \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \Leftrightarrow I(\mathcal{H})\in Cons_{U}^{P}(\mathcal{N}^{C}) \Leftrightarrow I(\mathcal{H})\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for each GD_{U}^{P}(x, N_{d}(y))\neq\emptyset \Leftrightarrow I(\mathcal{H})\cap I(Q(x, y))\neq\emptyset for each I(Q(x, y))\neq\emptyset \Leftrightarrow \mathcal{H}\cap Q(x, y)\neq\emptyset for each Q(x, y)\neq\emptyset . Hence, (4) is proved, and (5) and (6) are also obtained by Theorem 3.

    Remark 4. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . From the proof of Corollary 1, \{I(Q(y, x))|x, y\in U\} and \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} determine the same discernibility function. However, the matrix \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} merges equal elements and has more empty sets in comparison with \{I(Q(y, x))|x, y\in U\} .

    Optimistic multi-granulation reduction of DMSs is also discussed in [14,18,21].

    Definition 19. [14,18,21] Assume that (U, \mathcal{A}, d) is a DMS, where \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . Let \mathcal{H}\subseteq \mathcal{A} and \mathcal{H}\neq \emptyset .

    (1) \mathcal{H} is called a complete optimistic lower consistent set (COL-consistent set) if \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all COL-consistent sets by Cons_{L}^{O}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a COL-reduct of (U, \mathcal{A}, d) . Denote the family of all COL-reducts of (U, \mathcal{A}, d) by Red_{L}^{O}(\mathcal{A}) , and Core_{L}^{O}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{L}^{O}(\mathcal{A})}\mathcal{H} is called a COL-core.

    (2) \mathcal{H} is called a complete optimistic upper consistent set (COU-consistent set) if \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all COU-consistent sets by Cons_{U}^{O}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a COU-reduct of (U, \mathcal{A}, d) . Denote the family of all COU-reducts of (U, \mathcal{A}, d) by Red_{U}^{O}(\mathcal{A}) , and Core_{U}^{O}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{U}^{O}(\mathcal{A})}\mathcal{H} is called a COU-core.

    The optimistic multi-granulation reduction of (U, \mathcal{A}, d) is closely associated with the optimistic multi-granulation reduction of the GNDIS (U, \mathcal{N}^{C}, N_{d}) induced by (U, \mathcal{A}, d) .

    Theorem 10. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A} , A_{k} \in \mathcal{A} ,

    (1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) .

    (2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{O}(\mathcal{N}^{C}) .

    (3) A_{k} \in Core_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{O}(\mathcal{N}^{C}) .

    (4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Cons_{U}^{O}(\mathcal{N}^{C}) .

    (5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{O}(\mathcal{N}^{C}) .

    (6) A_{k} \in Core_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{O}(\mathcal{N}^{C}) .

    Proof. (1) By Definitions 13 and 19, and Proposition 6,

    \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U

    \Leftrightarrow \; \underline{\sum\limits_{\mathcal{N}^{C}}N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{O}}(N_{d}(y)) for all y\in U

    \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) .

    (2) and (3). Due to (1) and Definitions 13 and 19, the conclusions are obtained.

    (4)–(6). With reference to the proof of (1)–(3), the conclusions can be proved by Definitions 13 and 19, and Proposition 6.

    In [21], Tan et al. also presented a discernibility matrix to get COU-reducts. However, the optimistic lower reduction was not characterized by discernibility matrices in [21]. We present a discernibility matrix to compute COL-reducts.

    Definition 20. Let (U, \mathcal{A}, d) be a DMS. For each x\in U , define

    MD(x) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}||f_{\{A_{k}\}}(x)| = 1\}, & & {x \in \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([x]_{d}),} \\ \mathcal{A}, & & {else.} \end{array}\right.

    \mathcal{MD} = \{MD(x)|x\in U\} is called a COL-discernibility matrix.

    For any x, y\in U , define G(x, y) = \{A_{k}\in \mathcal{A}|d(y)\not\in f_{\{A_{k}\}}(x)\} [21]. \mathcal{G} = \{G(x, y)|(x, y)\in U\times U\} is called a COU-discernibility matrix.

    Remark 5. From the proof of Corollary 1, d(y)\not\in f_{\{A_{k}\}}(x)\Leftrightarrow [x]_{A_{k}}\cap [y]_{d} = \emptyset . If \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = U for all y\in U , then [x]_{A_{k}}\cap [y]_{d}\neq\emptyset for all x\in U and A_{k}\in \mathcal{A} . It follows that G(x, y) = \emptyset for all x, y\in U . Hence, we cannot get the COU-reducts from \mathcal{G} . Then, G(x, y) is defined by

    G(x,y) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}|d(y)\not\in f_{\{A_{k}\}}(x)\}, & & {x\not\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}),} \\ \mathcal{A}, & & {else.} \end{array}\right.

    Corollary 2. For any \mathcal{H}\subseteq \mathcal{A} , A_{k}\in \mathcal{A} ,

    (1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap MD(x)\neq \emptyset for all MD(x)\in \mathcal{MD} .

    (2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap MD(x)\neq \emptyset for all MD(x)\in \mathcal{MD} , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists an MD(x)\in \mathcal{MD} such that MD(x)\cap \mathcal{H}_{0} = \emptyset .

    (3) A_{k}\in Core _{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \exists x\in U , MD(x) = \{A_{k}\} .

    (4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap G(x, y)\neq \emptyset for all G(x, y)\in \mathcal{G} [21].

    (5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap G(x, y)\neq \emptyset for all G(x, y)\in\mathcal{G} , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists a G(x, y)\in\mathcal{G} such that G(x, y)\cap \mathcal{H}_{0} = \emptyset [21].

    (6) A_{k}\in Core _{U}^{O}(\mathcal{A}) \; \Leftrightarrow there exists a (x, y)\in U\times U such that G(x, y) = \{A_{k}\} [21].

    Proof. (1)–(3). By Theorem 10, \mathcal{H} is a COL-consistent set (or COL-reduct) of (U, \mathcal{A}, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOL-consistent set (or GNOL-reduct) of (U, \mathcal{N}^{C}, N_{d}) . For any x\in U, A_{k}\in \mathcal{A} , |f_{\{A_{k}\}}(x)| = 1 if N_{\{A_{k}\}}(x)\subseteq [x]_{d} . It follows from Definitions 14 and 20 that I(MD(x)) = GD_{L}^{O}(x, N_{d}(x)) for all x\in U .

    Due to Theorem 5, \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{L}^{O}(x, N_{d}(x))\neq\emptyset for every x\in U \; \Leftrightarrow \; \mathcal{H} \cap MD(x)\neq\emptyset for every MD(x)\in \mathcal{MD} . Hence (1) is obtained. (2) and (3) can be deduced from Theorem 5 analogously.

    (4)–(6). According to Theorem 10, \mathcal{H} is a COU-consistent set (or COU-reduct) of (U, \mathcal{A}, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOU-consistent set (or GNOU-reduct) of (U, \mathcal{N}^{C}, N_{d}) .

    For any x, y\in U , if x\not\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(N_{d}(y)) , then

    I(G(x,y)) = I(\{A_{k}\in \mathcal{A}|d(y)\not\in f_{\{A_{k}\}}(x)\}) \\ = I(\{A_{k}\in \mathcal{A}|[y]_{d}\cap [x]_{A_{k}} = \emptyset\}) \\ = \{N_{k}\in \mathcal{N}^{C}|N_{d}(y)\cap N_{k}(x) = \emptyset\} \\ = GD_{U}^{O}(x,N_{d}(y)) .

    If x\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(N_{d}(y)) , I(G(x, y)) = \mathcal{N}^{C} = GD_{U}^{O}(x, N_{d}(y)) . We can conclude that I(G(x, y)) = GD_{U}^{O}(x, N_{d}(y)) for all x, y\in U .

    By Theorem 7, \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Cons_{U}^{O}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all x, y\in U \; \Leftrightarrow \; I(\mathcal{H}) \cap I(G(x, y))\neq\emptyset for each G(x, y)\in \mathcal{G} \; \Leftrightarrow \; \mathcal{H} \cap G(x, y)\neq\emptyset for each G(x, y)\in \mathcal{G} . Hence, (4) is proved. (5) and (6) are also proved by Theorem 7.

    We can see that a DMS is a GNDIS. Furthermore, due to Definition 10 and Theorem 2, a CPL-reduct can be obtained by a prime implicant of f(\mathcal{P}) . According to Definition 12 and Theorem 4, a CPU-reduct can be obtained by a prime implicant of f(\mathcal{Q}) . By Definition 15 and Theorem 6, the COL-reducts can be found from the prime implicants of f(\mathcal{MD}) . By Definition 17 and Theorem 8, the COU-reducts can be found from the prime implicants of f(\mathcal{G}) .

    Example 7. A DMS (U, \mathcal{A}, d) is presented in Table 3, where \mathcal{A} = \{A_{1} = \{a_{1}, a_{2}, a_{3}\}, A_{2} = \{a_{4}, a_{5}\}, A_{3} = \{a_{6}, a_{7}\}, A_{4} = \{a_{8}, a_{9}, a_{10}\}\} . The granules [x_{i}]_{A_{k}}, [x_{i}]_{d} , and N_{\mathcal{A}}(x_{i}) \; (i = 1, \cdots, 6; k = 1, \cdots, 4) are presented in Table 4. We obtain

    \mathcal{Q} = \left ( \begin{array}{c|cccccc} Q(x_{i},x_{j}) & x_{1} & x_{2} & x_{3} & x_{4} & x_{5} & x_{6} \\ \hline x_{1} & \mathcal{A} & \{A_{1},A_{2},A_{3}\} & \{A_{3}\} & \mathcal{A} & \mathcal{A} & \{A_{3},A_{4}\} \\ x_{2} & \{A_{3}\} & \mathcal{A} & \mathcal{A} & \{A_{3}\} & \{A_{1},A_{2},A_{3}\} & \{A_{1},A_{2},A_{3}\} \\ x_{3} & \{A_{3}\} & \mathcal{A} & \mathcal{A} & \{A_{3}\} & \{A_{1},A_{2},A_{3}\} & \{A_{1},A_{2},A_{3}\} \\ x_{4} & \mathcal{A} & \{A_{1},A_{2},A_{3}\} & \{A_{3}\} & \mathcal{A} & \mathcal{A} & \{A_{3},A_{4}\} \\ x_{5} & \mathcal{A} & \{A_{1},A_{2},A_{3}\} & \{A_{3}\} & \mathcal{A} & \mathcal{A} & \{A_{3},A_{4}\} \\ x_{6} & \{A_{3}\} & \emptyset & \{A_{1},A_{2},A_{3}\} & \emptyset & \{A_{4}\} & \mathcal{A} \\ \end{array} \right).
    Table 3.  A DMS.
    A_{1} A_{2} A_{3} A_{4} d
    a_{1} a_{2} a_{3} a_{4} a_{5} a_{6} a_{7} a_{8} a_{9} a_{10}
    x_{1} 2 2 1 2 1 3 1 0 2 1 1
    x_{2} 1 2 2 1 1 1 2 0 3 2 2
    x_{3} 2 0 3 1 2 3 1 1 3 2 2
    x_{4} 2 1 1 2 1 1 2 0 2 1 1
    x_{5} 1 2 2 1 1 1 2 1 2 1 1
    x_{6} 2 0 3 1 2 3 1 1 2 1 3

    Table 4.  The granules of elements in Example 7.
    \ast x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}
    [x_{i}]_{A_{1}} \{x_{1}\} \{x_{2}, x_{5}\} \{x_{3}, x_{6}\} \{x_{4}\} \{x_{2}, x_{5}\} \{x_{3}, x_{6}\}
    [x_{i}]_{A_{2}} \{x_{1}, x_{4}\} \{x_{2}, x_{5}\} \{x_{3}, x_{6}\} \{x_{1}, x_{4}\} \{x_{2}, x_{5}\} \{x_{3}, x_{6}\}
    [x_{i}]_{A_{3}} \{x_{1}, x_{3}, x_{6}\} \{x_{2}, x_{4}, x_{5}\} \{x_{1}, x_{3}, x_{6}\} \{x_{2}, x_{4}, x_{5}\} \{x_{2}, x_{4}, x_{5}\} \{x_{1}, x_{3}, x_{6}\}
    [x_{i}]_{A_{4}} \{x_{1}, x_{4}\} \{x_{2}\} \{x_{3}\} \{x_{1}, x_{4}\} \{x_{5}, x_{6}\} \{x_{5}, x_{6}\}
    [x_{i}]_{d} \{x_{1}, x_{4}, x_{5}\} \{x_{2}, x_{3}\} \{x_{2}, x_{3}\} \{x_{1}, x_{4}, x_{5}\} \{x_{1}, x_{4}, x_{5}\} \{x_{6}\}
    N_{\mathcal{A}}(x_{i}) \{x_{1}, x_{3}, x_{4}, x_{6}\} \{x_{2}, x_{4}, x_{5}\} \{x_{1}, x_{3}, x_{6}\} \{x_{1}, x_{2}, x_{4}, x_{5}\} \{x_{2}, x_{4}, x_{5}, x_{6}\} \{x_{1}, x_{3}, x_{5}, x_{6}\}


    By Corollary 1, Core_{U}^{P}(\mathcal{A}) = \{A_{3}, A_{4}\} , and \mathcal{A}_{0} = \{A_{3}, A_{4}\} is the only CPU-reduct.

    The DMS (U, \mathcal{A}, d) induces a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2} , N_{3}, N_{4}\} , where N_{i}(x) = [x]_{A_{i}}(i = 1, 2, 3, 4) and N_{d}(x) = [x]_{d} for all x\in U . By Definition 11, the GNPU-D matrix of (U, \mathcal{N}^{C}, N_{d}) is

    \mathcal{GD}_{U}^{P} = \left ( \begin{array}{c|ccc} GD_{U}^{P}(x_{i}, N_{d}(x_{j})) & N_{d}(x_{1}) = N_{d}(x_{4}) = N_{d}(x_{5}) & N_{d}(x_{2}) = N_{d}(x_{3}) & N_{d}(x_{6}) \\ \hline x_{1} & \mathcal{N}^{C} & \{N_{3}\} & \{N_{3}\} \\ x_{2} & \{N_{1},N_{2},N_{3}\} & \mathcal{N}^{C} & \emptyset \\ x_{3} & \{N_{3}\} & \mathcal{N}^{C} & \{N_{1},N_{2},N_{3}\} \\ x_{4} & \mathcal{N}^{C} & \{N_{3}\} & \emptyset \\ x_{5} & \mathcal{N}^{C} & \{N_{1},N_{2},N_{3}\} & \{N_{4}\} \\ x_{6} & \{N_{3},N_{4}\} & \{N_{1},N_{2},N_{3}\} & \mathcal{N}^{C} \\ \end{array} \right).

    By Theorem 3, the GNPU-reduct of (U, \mathcal{N}^{C}, N_{d}) is \mathcal{N}_{0} = \{N_{3}, N_{4}\} , and Core_{U}^{P}(\mathcal{N}^{C}) = \{N_{3}, N_{4}\} . It is easy to get that I(Q(x_{j}, x_{i})) = GD_{U}^{P}(x_{i}, N_{d}(x_{j})) . Moreover, I(\mathcal{A}_{0}) = \mathcal{N}_{0} and I(Core_{U}^{P}(\mathcal{A})) = Core_{U}^{P}(\mathcal{N}^{C}) .

    By Definition 2, \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{1}, x_{4}, x_{5}\}) = \{x_{1}, x_{4}\} , \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{2}, x_{3}\}) = \{x_{2}, x_{3}\} , \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{6}\}) = \emptyset . According to Definition 20, we get

    \mathcal{MD} = \left ( \begin{array}{c|ccc} x_{i} & MD(x_{i}) \\ \hline x_{1} & \{A_{1},A_{2},A_{4}\} \\ x_{2} & \{A_{4}\} \\ x_{3} & \{A_{4}\} \\ x_{4} & \{A_{1},A_{2},A_{4}\} \\ x_{5} & \emptyset \\ x_{6} & \emptyset \\ \end{array} \right).

    Due to Corollary 2, Core _{L}^{O}(\mathcal{A}) = \{A_{4}\} , and \{A_{4}\} is the only COL-reduct of (U, \mathcal{A}, d) .

    In an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , define a mapping N_{k}:U\rightarrow \mathcal{P}(U) by N_{k}(x) = S_{A_{k}}(x) for all x\in U \; (k = 1, 2, \cdots, m) and a mapping N_{d}:U\rightarrow \mathcal{P}(U) by N_{d}(y) = [y]_{d} for all y\in U . Then, each N_{k}\; (k = 1, 2, \cdots, m) is a reflexive neighborhood operator on U and N_{d} is a Pawlak neighborhood operator, and we get a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} , which is the GNDIS induced by the IDIS (U, A, d) . Define a mapping I:\mathcal{A}^{I}\rightarrow \mathcal{N}^{I} by I(A_{k}) = N_{k} for all A_{k}\in \mathcal{A}^{I} . From the definition of \mathcal{N}^{I} , it is easy to get that I is a bijection.

    Proposition 7. Suppose that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . For each X\subseteq U ,

    \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \underline{\sum\limits_{\mathcal{N}^{I}}N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \overline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{P}}(X) ,

    \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \underline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \overline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{O}}(X) .

    Proof. It follows directly from Definitions 3, 6, and 7.

    By Proposition 7, the generalized neighborhood multi-granulation rough set models extend the multi-granulation rough set models proposed in [15].

    Pessimistic multi-granulation reduction of an IDIS was discussed by Qian et al. [14,18,32].

    Definition 21. [14,18,32] Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and \mathcal{H}\subseteq \mathcal{A}^{I} .

    (1) \mathcal{H} is called an incomplete pessimistic lower consistent set (IPL-consistent set) if \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all IPL-consistent sets as Cons_{L}^{P}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IPL-reduct. Denote the family of all IPL-reducts of (U, A, d) by Red_{L}^{P}(\mathcal{A}^{I}) , and Core_{L}^{P}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I})}\mathcal{H} is said to be the IPL-core.

    (2) \mathcal{H} is called an incomplete pessimistic upper consistent set (IPU-consistent set) if \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all IPU-consistent sets by Cons_{U}^{P}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IPU-reduct. Denote the family of all IPU-reducts of (U, A, d) by Red_{U}^{P}(\mathcal{A}^{I}) , and Core_{U}^{P}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I})}\mathcal{H} is said to be the IPU-core.

    The multi-granulation reduction of an IDIS (U, A, d) can thus be transformed into the multi-granulation reduction of the GNDIS (U, \mathcal{N}^{I}, N_{d}) induced by (U, A, d) .

    Theorem 11. Consider an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k} \in \mathcal{A}^{I} ,

    (1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}) .

    (2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{P}(\mathcal{N}^{I}) .

    (3) A_{k} \in Core_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{P}(\mathcal{N}^{I}) .

    (4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{P}(\mathcal{N}^{I}) .

    (5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{P}(\mathcal{N}^{I}) .

    (6) A_{k} \in Core_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{P}(\mathcal{N}^{I}) .

    Proof. (1) Due to Definitions 8 and 21, and Proposition 7,

    \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U

    \Leftrightarrow \;\underline{\sum\limits_{\mathcal{N}^{I}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{P}}(N_{d}(y)) \text{ for all } y\in U \\ \Leftrightarrow \;I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}).

    (2) and (3). These follow from (1) and Definitions 8 and 21.

    (4)–(6). Similar to the proofs of (1)–(3), the conclusions can be obtained from Definitions 13 and 21, and Proposition 7.

    Discernibility matrices were defined by Zhang et al. [32] to compute the IPL-reducts and IPU-reducts of an IDIS. For any \mathcal{H}\subseteq \mathcal{A}^{I} , define the decision function h_{\mathcal{H}} by h_{\mathcal{H}}(x_{i}) = \{d(x_{j})|x_{j}\in IN_{\mathcal{H}}(x_{i})\} . For any x\in U , define

    IP(x) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}||h_{\{A_{k}\}}(x)| > 1 \}, & & {|h_{\mathcal{A}^{I}}(x)| > 1 ,} \\ \emptyset, & & {|h_{\mathcal{A}^{I}}(x)| = 1.} \end{array}\right.

    \mathcal{IP} = \{IP(x)|x\in U\} is called an IPL-discernibility matrix. For (x, y)\in U\times U , define

    IQ(x,y) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}|d(y)\in h_{\{A_{k}\}}(x)\}, & & {d(y)\in h_{\mathcal{A}^{I}}(x),} \\ \emptyset, & & {d(y)\not\in h_{\mathcal{A}^{I}}(x),} \end{array}\right.

    \mathcal{IQ} = \{IQ(x, y)|(x, y)\in U\times U\} is called an IPU-discernibility matrix.

    Remark 6. If |h_{\mathcal{A}^{I}}(x)| = 1 for each x\in U in an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , then each A_{k}\in \mathcal{A}^{I} is an IPL-reduct, yet IP(x) = \emptyset for all x\in U , so the discernibility function built from \mathcal{IP} cannot characterize these reducts. In what follows, IP(x) is therefore redefined by

    IP(x) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}||h_{\{A_{k}\}}(x)| > 1 \}, & & {|h_{\mathcal{A}^{I}}(x)| > 1 ,} \\ \mathcal{A}^{I}, & & {|h_{\mathcal{A}^{I}}(x)| = 1.} \end{array}\right.
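    Both discernibility sets can be computed directly from the tolerance classes. The sketch below is a hedged illustration of the definitions above, not code from [32]; the layout is an assumption ( S maps each granule name to the list of tolerance classes of x_{0}, x_{1}, \cdots , and d is the list of decision values).

```python
# An illustrative sketch of IP(x) (the Remark 6 version) and IQ(x, y).
# Assumed layout: S = {granule name: [S_{A_k}(x_0), S_{A_k}(x_1), ...]}
# (sets of object indices) and d = list of decision values.

def h(S, ks, x, d):
    """h_H(x): decision values met in IN_H(x), the intersection of the
    tolerance classes of x over the granules in ks."""
    inter = set.intersection(*(set(S[k][x]) for k in ks))
    return {d[y] for y in inter}

def IP(S, x, d):
    """Granules individually inconsistent at x; the whole family when x
    is already consistent under all granules (the Remark 6 branch)."""
    if len(h(S, S.keys(), x, d)) > 1:
        return {k for k in S if len(h(S, [k], x, d)) > 1}
    return set(S)

def IQ(S, x, y, d):
    """Granules whose tolerance class of x meets the decision d(y),
    provided d(y) is met by IN over the whole family; else empty."""
    if d[y] in h(S, S.keys(), x, d):
        return {k for k in S if d[y] in h(S, [k], x, d)}
    return set()
```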

    By Definition 9 and Theorem 1, as well as Definition 11 and Theorem 3, we obtain

    Corollary 3. [32] Assume that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k}\in\mathcal{A}^{I} ,

    (1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IP(x)\neq \emptyset for all IP(x)\in \mathcal{IP} .

    (2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IP(x)\neq \emptyset for all IP(x)\in \mathcal{IP} , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an IP(x)\in \mathcal{IP} such that IP(x)\cap \mathcal{H}_{0} = \emptyset .

    (3) A_{k}\in Core _{L}^{P} ( \mathcal{A}^{I} ) \Leftrightarrow there exists x\in U such that IP(x) = \{A_{k}\} .

    (4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IQ(x, y)\neq \emptyset for all IQ(x, y)\neq \emptyset .

    (5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IQ(x, y)\neq \emptyset for all IQ(x, y)\neq \emptyset , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an IQ(x, y) such that IQ(x, y)\cap \mathcal{H}_{0} = \emptyset .

    (6) A_{k}\in Core _{U}^{P} ( \mathcal{A}^{I} ) \Leftrightarrow there exists (x, y)\in U\times U such that IQ(x, y) = \{A_{k}\} .

    Proof. (1)–(3). By Theorem 11, \mathcal{H} is an IPL-consistent set (or an IPL-reduct) of (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNPL-consistent set (or a GNPL-reduct) of (U, \mathcal{N}^{I}, N_{d}) . According to Property 3 in [32], for any x\in U and \mathcal{H}\subseteq \mathcal{A}^{I} , |h_{\mathcal{H}}(x)| > 1 if and only if IN_{\mathcal{H}}(x)\not\subseteq [x]_{d} . Then, I(IP(x)) = GD_{L}^{P}(x, N_{d}(x)) .

    By Theorem 1, \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \cap I(IP(x))\neq\emptyset for each x\in U \; \Leftrightarrow \; \mathcal{H} \cap IP(x)\neq\emptyset for each IP(x)\in \mathcal{IP} .

    Similar to the proof of (1), we can get (2) and (3) by Theorem 1.

    (4)–(6). According to Theorem 11, \mathcal{H} is an IPU-consistent set (or an IPU-reduct) of (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNPU-consistent set (or a GNPU-reduct) of (U, \mathcal{N}^{I}, N_{d}) .

    For x, y\in U , d(y)\in h_{\{A_{k}\}}(x)\Leftrightarrow S_{A_{k}}(x)\cap [y]_{d}\neq\emptyset , and d(y)\in h_{\mathcal{A}^{I}}(x)\Leftrightarrow IN_{\mathcal{A}^{I}}(x)\cap [y]_{d}\neq\emptyset . Then,

    IQ(x,y) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}|S_{A_{k}}(x)\cap [y]_{d}\neq\emptyset\}, & & {IN_{\mathcal{A}^{I}}(x)\cap [y]_{d}\neq\emptyset,} \\ \emptyset, & & {IN_{\mathcal{A}^{I}}(x)\cap [y]_{d} = \emptyset.} \end{array}\right.

    According to Definition 11, I(IQ(x, y)) = GD_{U}^{P}(x, N_{d}(y)) . By Theorem 3, (4)–(6) hold.

    Optimistic multi-granulation reduction of an IDIS was discussed by Qian et al. [14,18,32].

    Definition 22. [14,18,32] Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and \mathcal{H}\subseteq \mathcal{A}^{I} .

    (1) \mathcal{H} is called an incomplete optimistic lower consistent set (IOL-consistent set) if \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all IOL-consistent sets as Cons_{L}^{O}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IOL-reduct. Denote the family of all IOL-reducts of (U, A, d) by Red_{L}^{O}(\mathcal{A}^{I}) , and Core_{L}^{O}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I})}\mathcal{H} is said to be the IOL-core.

    (2) \mathcal{H} is called an incomplete optimistic upper consistent set (IOU-consistent set) if \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all IOU-consistent sets as Cons_{U}^{O}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IOU-reduct. Denote the family of all IOU-reducts of (U, A, d) by Red_{U}^{O}(\mathcal{A}^{I}) , and Core_{U}^{O}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I})}\mathcal{H} is said to be the IOU-core.

    The multi-granulation reduction of an IDIS (U, A, d) can likewise be transformed into the multi-granulation reduction of the GNDIS (U, \mathcal{N}^{I}, N_{d}) induced by (U, A, d) .

    Theorem 12. Consider an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k} \in \mathcal{A}^{I} ,

    (1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{I}) .

    (2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{O}(\mathcal{N}^{I}) .

    (3) A_{k} \in Core_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{O}(\mathcal{N}^{I}) .

    (4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{O}(\mathcal{N}^{I}) .

    (5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{O}(\mathcal{N}^{I}) .

    (6) A_{k} \in Core_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{O}(\mathcal{N}^{I}) .

    Proof. It follows from Proposition 7 and Definitions 13 and 22.

    However, the optimistic multi-granulation reduction of IDISs was not considered in [32]. In the following, we present two discernibility matrices to characterize the IOL-reducts and IOU-reducts of an IDIS.

    Definition 23. Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any x\in U , define

    ID_{L}^{O}(x,[x]_{d}) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}|S_{A_{k}}(x)\subseteq [x]_{d}\}, & & {x \in \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([x]_{d}),} \\ \mathcal{A}^{I}, & & {\text{otherwise}.} \end{array}\right.

    \mathcal{ID}_{L}^{O} = \{ID_{L}^{O}(x, [x]_{d})|x\in U\} is called an IOL-discernibility matrix. For any x\in U, [y]_{d}\in U/R_{d} , define

    ID_{U}^{O}(x,[y]_{d}) = \left\{ \begin{array}{rcl} \{A_{k}\in \mathcal{A}^{I}|S_{A_{k}}(x)\cap [y]_{d} = \emptyset\}, & & {x \not\in \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}),} \\ \mathcal{A}^{I}, & & {\text{otherwise}.} \end{array}\right.

    \mathcal{ID}_{U}^{O} = \{ID_{U}^{O}(x, [y]_{d})|x\in U, [y]_{d}\in U/R_{d}\} is called an IOU-discernibility matrix.
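    Under the same assumed data layout as before, Definition 23 translates into a few lines of Python. The following is an illustrative sketch with invented names, not an implementation prescribed by the paper.

```python
# A sketch of the IOL/IOU-discernibility sets of Definition 23.
# Assumed layout: S = {granule name: list of tolerance classes per
# object}; block_x, block_y are decision classes as sets of indices.

def ID_L_O(S, x, block_x):
    """ID_L^O(x, [x]_d): granules with S_{A_k}(x) inside [x]_d when x
    is in the optimistic lower approximation, else the whole family."""
    inside = {k for k in S if S[k][x] <= block_x}
    return inside if inside else set(S)    # inside != {} iff x in lower

def ID_U_O(S, x, block_y):
    """ID_U^O(x, [y]_d): granules with S_{A_k}(x) missing [y]_d when x
    is outside the optimistic upper approximation, else the family."""
    missing = {k for k in S if not (S[k][x] & block_y)}
    return missing if missing else set(S)  # missing != {} iff x not in upper
```

    The emptiness tests encode the membership conditions: x lies in the optimistic lower approximation iff some S_{A_{k}}(x) is contained in [x]_{d} , and x lies outside the optimistic upper approximation iff some S_{A_{k}}(x) misses [y]_{d} .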

    Corollary 4. Suppose that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k}\in \mathcal{A}^{I} ,

    (1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{L}^{O}(x, [x]_{d})\neq \emptyset for all ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} .

    (2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{L}^{O}(x, [x]_{d})\neq \emptyset for all ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} , and for any \mathcal{H}_{0}\subset \mathcal{H} , there exists an ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} such that ID_{L}^{O}(x, [x]_{d})\cap \mathcal{H}_{0} = \emptyset .

    (3) A_{k}\in Core _{L}^{O} ( \mathcal{A}^{I} ) \Leftrightarrow there is some x\in U such that ID_{L}^{O}(x, [x]_{d}) = \{A_{k}\} .

    (4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{U}^{O}(x, [y]_{d})\neq \emptyset for any ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} .

    (5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{U}^{O}(x, [y]_{d})\neq \emptyset for any ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} such that ID_{U}^{O}(x, [y]_{d})\cap \mathcal{H}_{0} = \emptyset .

    (6) A_{k}\in Core _{U}^{O} ( \mathcal{A}^{I} ) \Leftrightarrow there exist x\in U, [y]_{d}\in U/R_{d} such that ID_{U}^{O}(x, [y]_{d}) = \{A_{k}\} .

    Proof. By Theorem 12, \mathcal{H} is an IOL-consistent set (or IOL-reduct, IOU-consistent set, IOU-reduct, respectively) of an IDIS (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOL-consistent set (or GNOL-reduct, GNOU-consistent set, GNOU-reduct, respectively) of the GNDIS (U, \mathcal{N}^{I}, N_{d}) .

    By Definitions 14 and 23, I(ID_{L}^{O}(x, [x]_{d})) = GD_{L}^{O}(x, N_{d}(x)) for all x\in U . From Remark 2 and Theorem 5, (1)–(3) are obtained.

    Due to Definitions 16 and 23, I(ID_{U}^{O}(x, [y]_{d})) = GD_{U}^{O}(x, N_{d}(y)) for all [y]_{d}\in U/R_{d}, x\in U . According to Theorem 7, (4)–(6) hold.

    By Definition 10 and Theorem 2, an IPL-reduct can be obtained from a prime implicant of f(\mathcal{IP}) . According to Definition 12 and Theorem 4, an IPU-reduct can be obtained from a prime implicant of f(\mathcal{IQ}) . By Definition 15 and Theorem 6, the IOL-reducts can be found from the prime implicants of f((\mathcal{ID}_{L}^{O})^{\ast}) . Due to Definition 17 and Theorem 8, the IOU-reducts can be found from the prime implicants of f(\mathcal{ID}_{U}^{O}) . We employ an example to illustrate the calculation method mentioned above.
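    Before turning to the example, note that extracting the prime implicants of such a discernibility function is exactly the problem of enumerating the minimal hitting sets of its nonempty discernibility sets. The brute-force Python sketch below (an illustration only; exponential in the number of granules, so suitable just for small examples such as the one that follows) computes all of them.

```python
# A brute-force sketch: each reduct is a minimal subset of granules
# hitting every nonempty discernibility set, i.e., a prime implicant
# of the discernibility function. For illustration on small inputs.

from itertools import combinations

def all_reducts(disc_sets, universe):
    clauses = [frozenset(c) for c in disc_sets if c]  # drop empty entries
    found = []
    for r in range(len(universe) + 1):                # smallest first
        for H in map(frozenset, combinations(sorted(universe), r)):
            if all(H & c for c in clauses) and not any(f <= H for f in found):
                found.append(H)                       # minimal hitting set
    return found
```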

    Example 8. An IDIS (U, A, d) is presented in Table 5, and \mathcal{A}^{I} = \{A_{1} = \{a_{1}, a_{2}, a_{3}\}, A_{2} = \{a_{4}, a_{5}\}, A_{3} = \{a_{6}, a_{7}\}, A_{4} = \{a_{8}, a_{9}, a_{10}\}\} . The granules of the elements are presented in Table 6.

    Table 5.  An IDIS.
    \ast a_{1} a_{2} a_{3} a_{4} a_{5} a_{6} a_{7} a_{8} a_{9} a_{10} d
    x_{1} 2 1 2 1 1 3 1 1 2 1 1
    x_{2} \ast 1 3 2 1 1 2 \ast 3 2 2
    x_{3} 1 0 3 1 2 1 2 1 \ast 2 2
    x_{4} 2 2 \ast 2 0 1 \ast 0 2 1 2
    x_{5} 1 \ast \ast 1 \ast 3 1 1 2 \ast 1
    x_{6} 2 0 3 1 2 \ast 1 1 2 1 3

    Table 6.  The granules of the elements in Example 8.
    \ast x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}
    S_{A_{1}}(x_{i}) \{x_{1}\} \{x_{2}, x_{5}\} \{x_{3}, x_{5}\} \{x_{4}\} \{x_{2}, x_{3}, x_{5}\} \{x_{6}\}
    S_{A_{2}}(x_{i}) \{x_{1}, x_{5}\} \{x_{2}\} \{x_{3}, x_{5}, x_{6}\} \{x_{4}\} \{x_{1}, x_{3}, x_{5}, x_{6}\} \{x_{3}, x_{5}, x_{6}\}
    S_{A_{3}}(x_{i}) \{x_{1}, x_{5}, x_{6}\} \{x_{2}, x_{3}, x_{4}\} \{x_{2}, x_{3}, x_{4}\} \{x_{2}, x_{3}, x_{4}, x_{6}\} \{x_{1}, x_{5}, x_{6}\} \{x_{1}, x_{4}, x_{5}, x_{6}\}
    S_{A_{4}}(x_{i}) \{x_{1}, x_{5}, x_{6}\} \{x_{2}, x_{3}\} \{x_{2}, x_{3}, x_{5}\} \{x_{4}\} \{x_{1}, x_{3}, x_{5}, x_{6}\} \{x_{1}, x_{5}, x_{6}\}
    IN_{\mathcal{A}^{I}}(x_{i}) \{x_{1}, x_{5}, x_{6}\} \{x_{2}, x_{3}, x_{4}, x_{5}\} \{x_{2}, x_{3}, x_{4}, x_{5}, x_{6}\} \{x_{2}, x_{3}, x_{4}, x_{6}\} \{x_{1}, x_{2}, x_{3}, x_{5}, x_{6}\} \{x_{1}, x_{3}, x_{4}, x_{5}, x_{6}\}


    We get that U/R_{d} = \{\{x_{1}, x_{5}\}, \{x_{2}, x_{3}, x_{4}\}, \{x_{6}\}\} . By Definition 3, \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{1}, x_{5}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{2}, x_{3}, x_{4}\}) = \{x_{2}, x_{3}, x_{4}\} , \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{1}, x_{5}\}) = \{x_{1}, x_{5}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{2}, x_{3} , x_{4}\}) = \{x_{2}, x_{3}, x_{4}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} .

    According to Definition 23, we have

    \mathcal{ID}_{L}^{O} = \left ( \begin{array}{c|c} x_{i} & ID_{L}^{O}(x_{i}, [x_{i}]_{d}) \\ \hline x_{1} & \{A_{1},A_{2}\} \\ x_{2} & \{A_{2},A_{3},A_{4}\} \\ x_{3} & \{A_{3}\} \\ x_{4} & \{A_{1},A_{2},A_{4}\} \\ x_{5} & \mathcal{A}^{I} \\ x_{6} & \{A_{1}\} \\ \end{array} \right).

    Hence f(\mathcal{ID}_{L}^{O}) = (A_{1}\vee A_{2})\wedge (A_{2}\vee A_{3}\vee A_{4})\wedge (A_{3}) \wedge (A_{1}\vee A_{2}\vee A_{4}) \wedge (A_{1}) \wedge (A_{1}\vee A_{2}\vee A_{3}\vee A_{4}) = \; A_{1}\wedge A_{3} , so \{A_{1}, A_{3}\} is the unique IOL-reduct.

    By Definition 23, we get

    \mathcal{ID}_{U}^{O} = \left ( \begin{array}{c|ccc} ID_{U}^{O}(x_{i}, [x_{j}]_{d}) & [x_{1}]_{d} = [x_{5}]_{d} & [x_{2}]_{d} = [x_{3}]_{d} = [x_{4}]_{d} & [x_{6}]_{d} \\ \hline x_{1} & \mathcal{A}^{I} & \mathcal{A}^{I} & \{A_{1},A_{2}\} \\ x_{2} & \{A_{2},A_{3},A_{4}\} & \mathcal{A}^{I} & \mathcal{A}^{I} \\ x_{3} & \{A_{3}\} & \mathcal{A}^{I} & \{A_{1},A_{3},A_{4}\} \\ x_{4} & \mathcal{A}^{I} & \mathcal{A}^{I} & \{A_{1},A_{2},A_{4}\} \\ x_{5} & \mathcal{A}^{I} & \{A_{3}\} & \{A_{1}\}\\ x_{6} & \{A_{1}\} & \{A_{1},A_{4}\} & \mathcal{A}^{I} \\ \end{array} \right).

    Then, f(\mathcal{ID}_{U}^{O}) = (A_{2}\vee A_{3}\vee A_{4})\wedge (A_{3})\wedge (A_{1}) \wedge (A_{1}\vee A_{4}) \wedge (A_{1}\vee A_{2}) \wedge (A_{1}\vee A_{3}\vee A_{4}) \wedge (A_{1}\vee A_{2}\vee A_{4}) \wedge (A_{1}) \wedge (A_{1}\vee A_{2}\vee A_{3}\vee A_{4}) = \; A_{1} \wedge A_{3} . Thus, \{A_{1}, A_{3}\} is the unique IOU-reduct.
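    The whole of Example 8 can be replayed mechanically. The following self-contained script (an illustration; the 0-based indexing and all identifiers are assumptions) rebuilds the tolerance classes of Table 6 from Table 5, forms the sets ID_{L}^{O}(x_{i}, [x_{i}]_{d}) of Definition 23, and extracts the minimal hitting sets; it prints [{'A1', 'A3'}], agreeing with the IOL-reduct computed above.

```python
# End-to-end check of the IOL-reduct of Example 8 (illustrative code).
from itertools import combinations

TABLE = [  # rows x_1..x_6 of Table 5; '*' marks a missing value
    ['2','1','2','1','1','3','1','1','2','1'],
    ['*','1','3','2','1','1','2','*','3','2'],
    ['1','0','3','1','2','1','2','1','*','2'],
    ['2','2','*','2','0','1','*','0','2','1'],
    ['1','*','*','1','*','3','1','1','2','*'],
    ['2','0','3','1','2','*','1','1','2','1'],
]
D = [1, 2, 2, 2, 1, 3]                                # decision column d
GRANULES = {'A1': [0,1,2], 'A2': [3,4], 'A3': [5,6], 'A4': [7,8,9]}

def tolerant(i, j, attrs):                            # tolerance on a granule
    return all(TABLE[i][a] == TABLE[j][a] or '*' in (TABLE[i][a], TABLE[j][a])
               for a in attrs)

S = {k: [frozenset(j for j in range(6) if tolerant(i, j, attrs))
         for i in range(6)] for k, attrs in GRANULES.items()}
DBLOCK = [frozenset(j for j in range(6) if D[j] == D[i]) for i in range(6)]

# ID_L^O(x, [x]_d) of Definition 23; the inner set is nonempty exactly
# when x lies in the optimistic lower approximation of its class.
disc = []
for x in range(6):
    inside = {k for k in S if S[k][x] <= DBLOCK[x]}
    disc.append(inside if inside else set(GRANULES))

reducts = []
for r in range(1, len(GRANULES) + 1):                 # minimal hitting sets
    for H in map(set, combinations(sorted(GRANULES), r)):
        if all(H & c for c in disc) and not any(f <= H for f in reducts):
            reducts.append(H)
print(reducts)                                        # [{'A1', 'A3'}]
```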

    The multi-granulation reduction structures of GNDISs based on multi-granulation rough sets have been discussed in this paper, and discernibility matrices and discernibility functions have been constructed to compute the multi-granulation reducts of GNDISs. Furthermore, the multi-granulation reductions of DMSs and IDISs have been characterized by discernibility matrices and discernibility functions built on the reduction theory of GNDISs. Hence, the multi-granulation reduction of GNDISs can serve as a general model for the multi-granulation reduction of DISs by the discernibility technique, which provides a theoretical foundation for designing algorithms for the multi-granulation reduction of DISs. We summarize the multi-granulation reducts of the three kinds of DISs in Table 7. The discernibility method is a theoretical method for computing all the reducts; an algorithm that computes all the reducts of a high-dimensional information system from discernibility matrices and discernibility functions is therefore time-consuming. Consequently, heuristic reduction algorithms based on discernibility matrices can be designed to obtain a single reduct (a sketch of one such heuristic follows this paragraph). Matrix computation or dynamic reduction algorithms based on discernibility matrices could also be used to improve the computational efficiency of reduction algorithms. In our future work, we will explore the multi-granulation reduction of partially labelled DISs by the discernibility technique.
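    As one possible shape for such a heuristic (a sketch under our own assumptions, not an algorithm from the paper), the nonempty discernibility sets can be treated as a set-cover instance: greedily pick the granule that hits the most uncovered sets, then prune redundant picks. This returns a single reduct in polynomial time instead of all reducts.

```python
# A greedy heuristic sketch over a family of discernibility sets.
def greedy_reduct(disc_sets, universe):
    uncovered = [set(c) for c in disc_sets if c]      # ignore empty entries
    reduct = set()
    while uncovered:
        # pick the granule hitting the most still-uncovered sets
        best = max(universe, key=lambda g: sum(g in c for c in uncovered))
        reduct.add(best)
        uncovered = [c for c in uncovered if best not in c]
    for g in sorted(reduct):                          # prune redundancy
        if all((reduct - {g}) & set(c) for c in disc_sets if c):
            reduct.discard(g)
    return reduct                                     # a minimal hitting set
```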

    Table 7.  The multi-granulation reductions of GNDISs, DMSs and IDISs.
    Information system Granular structure Granularity transform GNIS or GNDIS Reduction Discernibility set
    GNDIS (U, \mathcal{N}, N_{d}) \mathcal{N}=\{N_{1}, N_{2}, \; \cdots, N_{m}\} \ast \ast GNPL-reduct GD_{L}^{P}(x, N_{d}(y))
    C_{N_{d}}=\{N_{d}(y)|y\in U\} \ast \ast GNPU-reduct GD_{U}^{P}(x, N_{d}(y))
    \ast \ast GNOL-reduct GD_{L}^{O}(x, N_{d}(y))
    \ast \ast GNOU-reduct GD_{U}^{O}(x, N_{d}(y))
    DMS (U, \mathcal{A}, d) \mathcal{A}=\{A_{k}\subseteq A|k=1, \cdots, m\} N_{k}(x)=[x]_{A_{k}} , (U, \mathcal{N}^{C}, N_{d}) CPL-reduct P(x) [21]
    U/R_{d}=\{[y]_{d}|y\in U\} N_{d}(y)=[y]_{d} \mathcal{N}^{C}=\{N_{k}|k=1, \cdots, m\} I(P(x))=GD_{L}^{P}(x, N_{d}(x))
    I(A_{k})=N_{k} CPU-reduct Q(x, y) [21]
    \{I(Q(y, x))|x, y\in U\}=
    \{GD_{U}^{P}(x, N_{d}(y))|x, y\in U\}
    COL-reduct MD(x)
    I(MD(x))=GD_{L}^{O}(x, N_{d}(x))
    COU-reduct G(x, y) [21]
    I(G(x, y))=GD_{U}^{O}(x, N_{d}(y))
    IDIS (U, A, d) \mathcal{A}^{I}=\{A_{k}\subseteq A|k=1, \cdots, m\} N_{k}(x)=S_{A_{k}}(x) , (U, \mathcal{N}^{I}, N_{d}) IPL-reduct IP(x) [32]
    U/R_{d}=\{[y]_{d}|y\in U\} N_{d}(y)=[y]_{d} \mathcal{N}^{I}=\{N_{k}|k=1, \cdots, m\} I(IP(x))=GD_{L}^{P}(x, N_{d}(x))
    I(A_{k})=N_{k} IPU-reduct IQ(x, y) [32]
    I(IQ(x, y))=GD_{U}^{P}(x, N_{d}(y))
    IOL-reduct ID_{L}^{O}(x, [x]_{d})
    I(ID_{L}^{O}(x, [x]_{d}))=GD_{L}^{O}(x, N_{d}(x))
    IOU-reduct ID_{U}^{O}(x, [y]_{d})
    I(ID_{U}^{O}(x, [y]_{d}))=GD_{U}^{O}(x, N_{d}(y))

     | Show Table
    DownLoad: CSV

    Yanlan Zhang: Conceptualization, Funding Acquisition, Formal analysis, Writing-Original Draft; Changqing Li: Conceptualization, Validation, Funding Acquisition, Writing-Review & Editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by Grants from the Fujian Provincial Natural Science Foundation of China (Nos. 2022J01912, 2024J01803).

    The authors declare that there is no conflict of interest.



    [1] J. C. R. Alcantud, The semantics of N-soft sets, their applications, and a coda about three-way decision, Inform. Sciences, 606 (2022), 837–852. https://doi.org/10.1016/j.ins.2022.05.084 doi: 10.1016/j.ins.2022.05.084
    [2] D. G. Chen, C. Z. Wang, Q. H. Hu, A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets, Inform. Sciences, 177 (2007), 3500–3518. https://doi.org/10.1016/j.ins.2007.02.041 doi: 10.1016/j.ins.2007.02.041
    [3] J. H. Dai, Z. Y. Wang, W. Y. Huang, Interval-valued fuzzy discernibility pair approach for attribute reduction in incomplete interval-valued information systems, Inform. Sciences, 642 (2023), 119215. https://doi.org/10.1016/j.ins.2023.119215 doi: 10.1016/j.ins.2023.119215
    [4] W. P. Ding, J. Nayak, B. Naik, D. Pelusi, M. Mishra, Fuzzy and real-coded chemical reaction optimization for intrusion detection in industrial big data environment, IEEE T. Ind. Inform., 17 (2021), 4298–4307. https://doi.org/10.1109/tii.2020.3007419 doi: 10.1109/tii.2020.3007419
    [5] H. Y. Gou, X. Y. Zhang, J. L. Yang, Z. Y. Lv, Three-way fusion measures and three-level feature selections based on neighborhood decision systems, Appl. Soft Comput., 148 (2023), 110842. https://doi.org/10.1016/j.asoc.2023.110842 doi: 10.1016/j.asoc.2023.110842
    [6] Q. H. Hu, D. R. Yu, J. F. Liu, C. X. Wu, Neighborhood rough set based heterogeneous feature subset selection, Inform. Sciences, 178 (2008), 3577–3594. https://doi.org/10.1016/j.ins.2008.05.024 doi: 10.1016/j.ins.2008.05.024
    [7] Y. Jiang, Y. Yu, Minimal attribute reduction with rough set based on compactness discernibility information tree, Soft Comput., 20 (2016), 2233–2243. https://doi.org/10.1007/s00500-015-1638-0 doi: 10.1007/s00500-015-1638-0
    [8] Q. Z. Kong, X. W. Zhang, W. H. Xu, S. T. Xie, Attribute reduction of multi-granulation information system, Artif. Intell. Rev., 53 (2020), 1353–1371. https://doi.org/10.1007/s10462-019-09699-3 doi: 10.1007/s10462-019-09699-3
    [9] M. Kryszkiewicz, Rough set approach to incomplete information systems, Inform. Sciences, 112 (1998), 39–49. https://doi.org/10.1016/S0020-0255(98)10019-1 doi: 10.1016/S0020-0255(98)10019-1
    [10] F. M. Ma, M. W. Ding, T. F. Zhang, J. Cao, Compressed binary discernibility matrix based incremental attribute reduction algorithm for group dynamic data, Neurocomputing, 344 (2019), 20–27. https://doi.org/10.1016/j.neucom.2018.01.094 doi: 10.1016/j.neucom.2018.01.094
    [11] F. M. Ma, T. F. Zhang, Generalized binary discernibility matrix for attribute reduction in incomplete information systems, J. China Univ. Posts Telecommun., 24 (2017), 57–68. https://doi.org/10.1016/s1005-8885(17)60224-3 doi: 10.1016/s1005-8885(17)60224-3
    [12] Z. Pawlak, Rough sets, Int. J. Comput. Inf. Sci., 11 (1982), 341–356. https://doi.org/10.1007/bf01001956 doi: 10.1007/bf01001956
    [13] Z. Pawlak, Rough sets: Theoretical aspects of reasoning about data, Boston: Kluwer Academic Publishers, 2012.
    [14] Y. H. Qian, S. Y. Li, J. Y. Liang, Z. Z. Shi, F. Wang, Pessimistic rough set based decisions: A multi-granulation fusion strategy, Inform. Sciences, 264 (2014), 196–210. https://doi.org/10.1016/j.ins.2013.12.014 doi: 10.1016/j.ins.2013.12.014
    [15] Y. H. Qian, J. Y. Liang, C. Y. Dang, Incomplete multi-granulation rough set, IEEE T. Syst. Man Cybern., 40 (2010), 420–431. https://doi.org/10.1109/tsmca.2009.2035436 doi: 10.1109/tsmca.2009.2035436
    [16] Y. H. Qian, J. Y. Liang, W. Pedrycz, C. Y. Dang, Positive approximation: An accelerator for attribute reduction in rough set theory, Artif. Intell., 174 (2010), 597–618. https://doi.org/10.1016/j.artint.2010.04.018 doi: 10.1016/j.artint.2010.04.018
    [17] Y. H. Qian, J. Y. Liang, W. Pedrycz, C. Y. Dang, An efficient accelerator for attribute reduction from incomplete data in rough set framework, Pattern Recogn., 44 (2011), 1658–1670. https://doi.org/10.1016/j.patcog.2011.02.020 doi: 10.1016/j.patcog.2011.02.020
    [18] Y. H. Qian, J. Y. Liang, Y. Y. Yao, C. Y. Dang, MGRS: A multi-granulation rough set, Inform. Sciences, 180 (2010), 949–970. https://doi.org/10.1016/j.ins.2009.11.023 doi: 10.1016/j.ins.2009.11.023
    [19] A. Skowron, Boolean reasoning for decision rules generation, In: Proceedings of the international symposium on methodologies for intelligent systems, 1993,295–305. https://doi.org/10.1007/3-540-56804-2_28
    [20] A. Skowron, C. Rauszer, The discernibility matrices and functions in information systems, In: R. Slowinski (ed), Intelligent decision support, Handbook of applications and advances of the rough sets theory, Kluwer Academic Publishers, Dordrecht, 1992. https://doi.org/10.1007/978-94-015-7975-9_21
    [21] A. H. Tan, W. Z. Wu, J. J. Li, T. J. Li, Reduction foundation with multi-granulation rough sets using discernibility, Artif. Intell. Rev., 53 (2020), 2425–2452. https://doi.org/10.1007/s10462-019-09737-0 doi: 10.1007/s10462-019-09737-0
    [22] C. Z. Wang, Q. He, D. G. Chen, Q. H. Hu, A novel method for attribute reduction of covering decision systems, Inform. Sciences, 254 (2014), 181–196. https://doi.org/10.1016/j.ins.2013.08.057 doi: 10.1016/j.ins.2013.08.057
    [23] W. Z. Wu, Knowledge reduction in random incomplete decision tables via evidence theory, Fund. Inform., 115 (2012), 203–218. https://doi.org/10.3233/fi-2012-650 doi: 10.3233/fi-2012-650
    [24] W. H. Xu, D. D. Guo, J. S. Mi, Y. H. Qian, K. Y. Zheng, W. P. Ding, Two-way concept-cognitive learning via concept movement viewpoint, IEEE T. Neur. Netw. Lear. Syst., 34 (2023), 6798–6812. https://doi.org/10.1109/tnnls.2023.3235800 doi: 10.1109/tnnls.2023.3235800
    [25] T. Yang, Y. F. Deng, B. Yu, Y. H. Qian, J. H. Dai, Local feature selection for large-scale data sets limited labels, IEEE T. Knowl. Data En., 35 (2023), 7152–7163. https://doi.org/10.1109/tkde.2022.3181208 doi: 10.1109/tkde.2022.3181208
    [26] Y. Y. Yang, D. G. Chen, X. Zhang, Z. Y. Ji, Covering rough set-based incremental feature selection for mixed decision system, Soft Comput., 26 (2022), 2651–2669. https://doi.org/10.1007/s00500-021-06687-0 doi: 10.1007/s00500-021-06687-0
    [27] Y. Y. Yao, Constructive and algebraic methods of theory of rough sets, Inform. Sciences, 109 (1998), 21–47. https://doi.org/10.1016/S0020-0255(98)00012-7 doi: 10.1016/S0020-0255(98)00012-7
    [28] Y. Y. Yao, Y. H. She, Rough set models in multi-granulation spaces, Inform. Sciences, 327 (2016), 40–56. https://doi.org/10.1016/j.ins.2015.08.011 doi: 10.1016/j.ins.2015.08.011
    [29] Y. Y. Yao, Y. Zhao, Discernibility matrix simplification for constructing attribute reducts, Inform. Sciences, 179 (2009), 867–882. https://doi.org/10.1016/j.ins.2008.11.020 doi: 10.1016/j.ins.2008.11.020
    [30] E. A. K. Zaman, A. Mohamed, A. Ahmad, Feature selection for online streaming high-dimensional data: A state-of-the-art review, Appl. Soft Comput., 127 (2022), 109355. https://doi.org/10.1016/j.asoc.2022.109355 doi: 10.1016/j.asoc.2022.109355
    [31] C. C. Zhang, H. Liu, Z. X. Lu, J. H. Dai, Fast attribute reduction by neighbor inconsistent pair selection for dynamic decision tables, Int. J. Mach. Learn. Cyber., 15 (2024), 739–756. https://doi.org/10.1007/s13042-023-01931-5 doi: 10.1007/s13042-023-01931-5
    [32] C. L. Zhang, J. J. Li, Y. D. Lin, Knowledge reduction of pessimistic multi-granulation rough sets in incomplete information systems, Soft Comput., 25 (2021), 12825–12838. https://doi.org/10.1007/s00500-021-06081-w doi: 10.1007/s00500-021-06081-w
    [33] J. Zhang, G. Q. Zhang, Z. W. Li, L. D. Qu, C. F. Wen, Feature selection in a neighborhood decision information system with application to single cell RNA data classification, Appl. Soft Comput., 113 (2021), 107876. https://doi.org/10.1016/j.asoc.2021.107876 doi: 10.1016/j.asoc.2021.107876
    [34] X. Y. Zhang, W. C. Zhao, Uncertainty measures and feature selection based on composite entropy for generalized multi-granulation fuzzy neighborhood rough set, Fuzzy Set. Syst., 486 (2024), 108971. https://doi.org/10.1016/j.fss.2024.108971 doi: 10.1016/j.fss.2024.108971
    [35] Y. Zhao, Y. Y. Yao, F. Luo, Data analysis based on discernibility and indiscernibility, Inform. Sciences, 177 (2007), 4959–4976. https://doi.org/10.1016/j.ins.2007.06.031 doi: 10.1016/j.ins.2007.06.031
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)