1. Introduction
With the development of information technology, datasets with large numbers of features have been collected in many application fields. However, such datasets usually contain many redundant features, which may degrade the classification ability of the data and increase the complexity of learning algorithms. Feature selection is a data preprocessing technique that selects a subset of the original feature set to improve the performance of learning algorithms. So far, feature selection has been applied to rule extraction [1], decision-making [4,24], and data mining [30].
Rough set theory [12] is a typical granular computing model and a meaningful mathematical tool for feature selection, which is also called attribute reduction in rough set theory. In rough set theory, datasets are represented as information systems. Owing to the diversity of datasets, different types of information systems have been discussed, whose attribute reduction structures are explored by different rough set models. For example, the attribute reduction of a complete decision information system (CDIS) was investigated based on the classical Pawlak rough set model, which is defined by equivalence (indiscernibility) relations or partitions [13,16,31]. The attribute reduction of an incomplete decision information system (IDIS) was discussed by relation rough sets [3,9,17,23]. The neighborhood rough set model, defined on neighborhood granules, was applied to the attribute reduction of neighborhood DISs [5,6,33]. The covering rough sets defined on coverings were utilized for reducing covering DISs [2,22,25,26]. The rough set models mentioned above are constructed from a single granular structure. However, in real-world applications, more and more datasets need to be described via multiple granular structures. Qian et al. proposed the concept of multi-granulation rough sets in CDISs and discussed attribute reduction of CDISs based on multi-granulation rough sets [14,18]. Kong et al. explored multi-granulation reduction of information systems [8]. Attribute reduction of IDISs based on multi-granulation rough sets was explored in [15]. Zhang et al. defined a generalized multi-granulation fuzzy neighborhood rough set model and discussed a feature selection method based on the model [34].
The technique of constructing discernibility matrices and discernibility functions, proposed by Skowron and Rauszer [20] and Skowron [19], is an important attribute reduction method. Yao and Zhao defined a minimal family of discernibility sets based on the family of discernibility sets to compute the reducts of CDISs [29]. Zhao et al. constructed the relative discernibility matrix of a CDIS to obtain relative reducts [35]. Jiang and Yu proposed a compactness discernibility information tree to obtain the minimal attribute reduction of a CDIS [7]. Ma et al. introduced a compressed binary discernibility matrix and designed an incremental attribute reduction algorithm for computing an attribute reduction set of a dynamic CDIS [10]. A binary discernibility matrix was designed to obtain attribute relative reducts of an IDIS [11]. Chen et al. [2], Wang et al. [22] and Yang et al. [25] constructed discernibility matrices to obtain the reducts of covering DISs. Discernibility techniques are also used to achieve the knowledge reduction of information systems based on multi-granulation rough sets. Tan et al. constructed discernibility matrices and discernibility functions to calculate the attribute reducts of a decision multi-granulation space (DMS) [21] and verified the effectiveness of the reduction methods by numerical experiments. However, the optimistic lower reducts are not calculated by discernibility matrices in [21]. Zhang et al. explored the attribute reduction of an IDIS by the discernibility approach in multi-granulation rough set theory [32] and presented numerical experiments to show the feasibility and effectiveness of the reduct-computing algorithms. However, the attribute reduction of IDISs based on optimistic multi-granulation rough sets is not considered in [32].
The purpose of this paper is to analyze and compare the multi-granulation reduction theory of DMSs [21] and of IDISs [32] from the viewpoint of discernibility, and to present a general model for the multi-granulation reduction of DISs by the discernibility technique, which provides a theoretical basis for the discernibility-based multi-granulation reduction of DISs. The notion of a generalized neighborhood decision information system (GNDIS) is introduced in this paper, the multi-granulation reductions of GNDISs based on multi-granulation rough sets are discussed, and discernibility matrices are constructed to compute the multi-granulation reducts of GNDISs. The pessimistic (or optimistic) approximations in DMSs and IDISs can then be recast as the multi-granulation pessimistic (or optimistic) approximations in GNDISs. Moreover, the pessimistic multi-granulation reducts and the optimistic multi-granulation reducts of DMSs discussed in [21] can be computed by the discernibility matrices and discernibility functions based on the multi-granulation reduction theory of GNDISs. Additionally, the pessimistic multi-granulation reduction structures of IDISs discussed in [32] are characterized by the discernibility-based reduction theory of GNDISs.
The remainder of this paper is organized as follows. In Section 2, the basic definitions of multi-granulation rough sets are reviewed. In Section 3, we introduce the multi-granulation rough sets of a GNDIS and discuss knowledge reduction of a GNDIS based on these rough sets; discernibility matrices and discernibility functions are constructed to characterize the multi-granulation reducts of a GNDIS. In Section 4, relationships between the multi-granulation reduction of DMSs and that of GNDISs are discussed; moreover, the optimistic lower reducts of DMSs are characterized by discernibility matrices. Section 5 explores relationships between the multi-granulation reduction of IDISs and that of GNDISs, and presents the optimistic multi-granulation reductions of IDISs via the discernibility technique. Section 6 concludes this study.
2. Preliminary knowledge on multi-granulation rough sets
In this section, we review some basic concepts about multi-granulation rough sets, which were proposed by Qian et al. [18] to approximate a target concept using multiple binary relations.
2.1. Multi-granulation rough sets in DMSs
Suppose that (U, A, d) is a DIS, in which U is the universe, A is a family of condition attributes with a:U\rightarrow V_{a} for every a\in A , where V_{a} is the value set of a , and d:U\rightarrow V_{d} is a decision attribute, where V_{d} is the value set of d .
In a DIS (U, A, d) , let \mathcal{A} = \{A_{k}|A_{k}\subseteq A, k = 1, 2, \cdots, m\} be a family of attribute subsets. Then, (U, \mathcal{A}, d) is called a DMS [28]. Each A_{k}\in \mathcal{A} induces an equivalence relation R_{A_{k}} = \{(x, y)\in U\times U|\forall a\in A_{k}(a(x) = a(y))\} and generates a granular structure U/R_{A_{k}} = \{[x]_{A_{k}}|x\in U\} , in which [x]_{A_{k}} = \{y\in U|(x, y)\in R_{A_{k}}\} . The decision attribute d generates a partition U/R_{d} = \{[x]_{d}|x\in U\}\triangleq\{X_{1}, X_{2}, \cdots, X_{n}\} , each block of which is a decision class with the same decision attribute value. The pessimistic and optimistic multi-granulation lower and upper approximations were discussed by Qian et al. [14,18].
Definition 1. [14] Given a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the pessimistic multi-granulation lower and upper approximations of X as
\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \{x\in U|([x]_{A_{1}}\subseteq X)\wedge([x]_{A_{2}}\subseteq X)\wedge\cdots\wedge([x]_{A_{m}}\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \;\sim\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(\sim X) .
One calls (\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X)) a pessimistic multi-granulation rough set.
For each x\in U and \mathcal{H}\subseteq \mathcal{A} , denote \bigcup_{A\in \mathcal{H}}[x]_{A} by N_{\mathcal{H}}(x) . Then, it is easy to get that \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \{x\in U|N_{\mathcal{A}}(x)\subseteq X\} .
Definition 2. [18] Given a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the optimistic multi-granulation lower and upper approximations of X as
\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \{x\in U|([x]_{A_{1}}\subseteq X)\vee([x]_{A_{2}}\subseteq X)\vee\cdots\vee([x]_{A_{m}}\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \;\sim\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\sim X) .
(\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X)) is called an optimistic multi-granulation rough set.
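To make Definitions 1 and 2 concrete, the following Python sketch computes both pairs of approximations directly from the equivalence classes [x]_{A_{k}} ; the universe, granulations, and target set are hypothetical toy data, not taken from any example in this paper.

```python
# Illustrative sketch of Definitions 1 and 2: pessimistic and optimistic
# multi-granulation approximations from precomputed equivalence classes.
# The universe and granular structures below are hypothetical toy data.

U = {1, 2, 3, 4}

# blocks[k][x] is the equivalence class [x]_{A_k} under the k-th granulation.
blocks = [
    {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}},   # granulation A_1
    {1: {1}, 2: {2, 3}, 3: {2, 3}, 4: {4}},          # granulation A_2
]

def pess_lower(X):
    # x belongs iff [x]_{A_k} is contained in X for EVERY granulation (conjunction)
    return {x for x in U if all(b[x] <= X for b in blocks)}

def opt_lower(X):
    # x belongs iff [x]_{A_k} is contained in X for SOME granulation (disjunction)
    return {x for x in U if any(b[x] <= X for b in blocks)}

def pess_upper(X):
    return U - pess_lower(U - X)   # duality with the lower approximation

def opt_upper(X):
    return U - opt_lower(U - X)

X = {1, 2, 3}
print(pess_lower(X), pess_upper(X))  # pessimistic pair
print(opt_lower(X), opt_upper(X))    # optimistic pair
```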
2.2. Multi-granulation rough sets in IDISs
If some attribute values of the attributes in A of a DIS (U, A, d) are missing or unknown, then each missing attribute value is expressed by the special symbol '\ast' and (U, A, d) is termed an IDIS. For any A_{k}\subseteq A , a tolerance relation is defined by Kryszkiewicz as [9]:
SIM(A_{k}) = \{(x, y)\in U\times U|\forall a\in A_{k}(a(x) = a(y)\vee a(x) = \ast\vee a(y) = \ast)\} .
A granular structure is induced by A_{k} as U/SIM(A_{k}) = \{S_{A_{k}}(x)|x\in U\} , where S_{A_{k}}(x) = \{y\in U|(x, y)\in SIM(A_{k})\} .
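The tolerance classes S_{A_{k}}(x) can be computed in the same style; in the sketch below (a hypothetical toy table), None plays the role of the missing-value symbol \ast .

```python
# Sketch of the tolerance relation SIM(A_k) on an incomplete table.
# None plays the role of the missing-value symbol '*'; the data are hypothetical.

table = {   # object -> attribute-value tuple over A_k = (a1, a2)
    1: (0, 1),
    2: (None, 1),
    3: (1, 1),
}

def tolerant(x, y):
    # (x, y) in SIM(A_k) iff on every attribute the values agree or one is missing
    return all(u == v or u is None or v is None
               for u, v in zip(table[x], table[y]))

def S(x):
    # tolerance class S_{A_k}(x)
    return {y for y in table if tolerant(x, y)}

for x in table:
    print(x, S(x))   # e.g. S(2) = {1, 2, 3} since None matches anything on a1
```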
The multi-granulation rough sets in IDISs were introduced by Qian et al. [15].
Definition 3. [15] Given an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , let X\subseteq U . Define the pessimistic multi-granulation lower and upper approximations of X as
\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \{x\in U|(S_{A_{1}}(x)\subseteq X)\wedge(S_{A_{2}}(x)\subseteq X)\wedge\cdots\wedge(S_{A_{m}}(x)\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \;\sim\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(\sim X) .
Define the optimistic multi-granulation lower and upper approximations of X as
\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \{x\in U|(S_{A_{1}}(x)\subseteq X)\vee(S_{A_{2}}(x)\subseteq X)\vee\cdots\vee(S_{A_{m}}(x)\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \;\sim\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\sim X) .
(\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X)) and (\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X)) are, respectively, the pessimistic multi-granulation rough set and the optimistic multi-granulation rough set of X .
For each x\in U and \mathcal{H}\subseteq \mathcal{A}^{I} , denote \bigcup_{A\in \mathcal{H}}S_{A}(x) by IN_{\mathcal{H}}(x) . It is clear that \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \{x\in U|IN_{\mathcal{A}^{I}}(x)\subseteq X\} .
3. Multi-granulation reduction of GNDISs
In this section, we present the definitions of multi-granulation rough sets in GNDISs and explore the multi-granulation reduction of GNDISs. Throughout this paper, the universe of discourse U is nonempty and finite. The family of all subsets of U is denoted by \mathcal{P}(U) . For X\subseteq U , \sim X is the complement of X .
3.1. Multi-granulation rough sets in GNDISs
Some basic concepts about neighborhood operators are presented in [27].
Definition 4. [27] Let U be the universe. A mapping N:U\rightarrow \mathcal{P}(U) is called a neighborhood operator. If x\in N(x) for all x\in U , N is a reflexive neighborhood operator. If x\in N(y)\Rightarrow y\in N(x) for all x, y\in U , N is a symmetric neighborhood operator. If [y\in N(x), z\in N(y)]\Rightarrow z\in N(x) for all x, y, z\in U , N is a transitive neighborhood operator. If the neighborhood operator N is reflexive, symmetric and transitive, N is called a Pawlak neighborhood operator.
Clearly, \{N(x)|x\in U\} of a Pawlak neighborhood operator N forms a partition of U .
Definition 5. Let N be a reflexive neighborhood operator on U . Denote \{N(x)|x\in U\} by C_{N} . The ordered pair (U, N) is called a generalized neighborhood approximation space.
Clearly, C_{N} from (U, N) is a covering of U . We now introduce the generalized neighborhood multi-granulation rough sets.
Definition 6. Let N_{1}, N_{2}, \cdots, N_{m}\;(m\geq 2) be reflexive neighborhood operators on U , \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , and let N_{d}:U\rightarrow \mathcal{P}(U) be a Pawlak neighborhood operator; then (U, \mathcal{N}, N_{d}) is called a GNDIS. For X\subseteq U , define the generalized neighborhood pessimistic lower approximation \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) and pessimistic upper approximation \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) by
\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) = \{x\in U|(N_{1}(x)\subseteq X)\wedge(N_{2}(x)\subseteq X)\wedge\cdots\wedge(N_{m}(x)\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) = \;\sim\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\sim X) .
(\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X), \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)) is the generalized neighborhood pessimistic multi-granulation rough set of X .
For \mathcal{H}\subseteq \mathcal{N} and x\in U , denote GN_{\mathcal{H}}(x) = \bigcup_{N\in \mathcal{H}}N(x) . Then, it is clear that \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) = \{x\in U|GN_{\mathcal{N}}(x)\subseteq X\} .
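Definition 6 and the GN_{\mathcal{H}} characterization above translate directly into code. The sketch below, over hypothetical reflexive neighborhood operators encoded as dicts, checks that the pointwise definition and the union-based characterization agree.

```python
# Sketch of Definition 6: each reflexive neighborhood operator N_i is a
# dict mapping x to N_i(x). The operators below are hypothetical toy data.

U = {1, 2, 3, 4}
N1 = {1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}}
N2 = {1: {1}, 2: {2, 3}, 3: {3}, 4: {3, 4}}
family = [N1, N2]

def pess_lower(X, ops):
    # pointwise form: every N_i(x) must be contained in X
    return {x for x in U if all(N[x] <= X for N in ops)}

def GN(x, ops):
    # GN_H(x): union of the neighborhoods of x over the family H
    return set().union(*(N[x] for N in ops))

X = {1, 2, 3}
# the union-based characterization from the text gives the same set
assert pess_lower(X, family) == {x for x in U if GN(x, family) <= X}
print(pess_lower(X, family))
```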
Proposition 1. Let (U, \mathcal{N}, N_{d}) be a GNDIS, X, Y\subseteq U , \mathcal{H}\subseteq \mathcal{N} and \mathcal{H}\neq\emptyset , then:
(1) \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\emptyset) = \emptyset , \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(U) = U , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\emptyset) = \emptyset , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(U) = U .
(2) \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\subseteq X\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) .
(3) X\subseteq Y\Rightarrow\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(Y) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(Y) .
(4) \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X\cap Y) = \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\cap\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(Y) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X\cup Y) = \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\cup\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(Y) .
(5) \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X))\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)) .
(6) \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X)\subseteq\underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(X)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(X) .
Proof. It is verified directly by Definition 6. □
Definition 7. Given a GNDIS (U, \mathcal{N}, N_{d}) with \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , let X\subseteq U . Define the generalized neighborhood optimistic lower approximation \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) and optimistic upper approximation \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) of X as
\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) = \{x\in U|(N_{1}(x)\subseteq X)\vee(N_{2}(x)\subseteq X)\vee\cdots\vee(N_{m}(x)\subseteq X)\} ,
\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) = \;\sim\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\sim X) .
(\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X), \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)) is the generalized neighborhood optimistic multi-granulation rough set of X .
Proposition 2. Let (U, \mathcal{N}, N_{d}) be a GNDIS, X, Y\subseteq U , \mathcal{H}\subseteq \mathcal{N} and \mathcal{H}\neq\emptyset , then:
(1) \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\emptyset) = \emptyset , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(U) = U , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\emptyset) = \emptyset , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(U) = U .
(2) \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\subseteq X\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) .
(3) X\subseteq Y\Rightarrow\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(Y) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(Y) .
(4) \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X\cap Y)\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\cap\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(Y) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\cup\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(Y)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X\cup Y) .
(5) \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X))\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)) .
(6) \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(X)\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(X)\subseteq\overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(X) .
Proof. It is easy to obtain the conclusion by Definition 7. □
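Unlike the pessimistic case, Proposition 2(4) gives only inclusions. A small sketch (with hypothetical operators chosen on purpose) exhibits a strict inclusion for the optimistic lower approximation of an intersection.

```python
# Sketch illustrating Proposition 2(4): the optimistic lower approximation
# only sub-distributes over intersection. Toy data chosen so that the
# inclusion is strict.

U = {1, 2, 3}
N1 = {1: {1, 2}, 2: {2}, 3: {3}}   # hypothetical reflexive operators
N2 = {1: {1, 3}, 2: {2}, 3: {3}}
ops = [N1, N2]

def opt_lower(X):
    return {x for x in U if any(N[x] <= X for N in ops)}

X, Y = {1, 2}, {1, 3}
lhs = opt_lower(X & Y)               # lower approximation of the intersection
rhs = opt_lower(X) & opt_lower(Y)    # intersection of the lower approximations
print(lhs, rhs)                      # lhs is a proper subset of rhs here
assert lhs <= rhs and lhs != rhs
```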
3.2. Pessimistic multi-granulation reduction of GNDISs
In a GNDIS (U, \mathcal{N}, N_{d}) , C_{N_{d}} = \{N_{d}(y)|y\in U\} is a partition of U . In the following, every GNDIS mentioned satisfies |C_{N_{d}}|\geq 2 . In this subsection, we present the pessimistic multi-granulation reduction of GNDISs.
Definition 8. Given a GNDIS (U, \mathcal{N}, N_{d}) , let \mathcal{H}\subseteq \mathcal{N} .
(1) If \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) for all y\in U , then we say that \mathcal{H} is a generalized neighborhood pessimistic lower consistent set (GNPL-consistent set). Denote the family of all GNPL-consistent sets by Cons_{L}^{P}(\mathcal{N}) . If \mathcal{H}\in Cons_{L}^{P}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called a GNPL-reduct. Denote the set of all GNPL-reducts by Red_{L}^{P}(\mathcal{N}) ; the core w.r.t. GNPL-reducts is defined as Core_{L}^{P}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{L}^{P}(\mathcal{N})\} .
(2) If \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) for all y\in U , then we say that \mathcal{H} is a generalized neighborhood pessimistic upper consistent set (GNPU-consistent set). Denote the family of all GNPU-consistent sets by Cons_{U}^{P}(\mathcal{N}) . If \mathcal{H}\in Cons_{U}^{P}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called a GNPU-reduct. Denote the set of all GNPU-reducts by Red_{U}^{P}(\mathcal{N}) ; the core w.r.t. GNPU-reducts is defined as Core_{U}^{P}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{U}^{P}(\mathcal{N})\} .
From Definition 8, we can see that a GNPL-reduct (or a GNPU-reduct) is a minimal subset of \mathcal{N} that preserves the pessimistic lower approximations (or the pessimistic upper approximations) of all sets in C_{N_{d}} . The pessimistic lower and upper reducts of a GNDIS are different in general, as the following example illustrates.
Example 1. (1) A GNDIS (U, \mathcal{N}, N_{d}) is presented in Table 1, where U = \{x_{1}, x_{2}, \cdots, x_{6}\} and \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{5}\} . The generalized neighborhood granules of x\in U are presented in Table 2.
By Definition 6, we get that \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}\}) = \emptyset , \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{6}\}) = \emptyset , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = U , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{4}, x_{5}\} , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{6}\}) = \{x_{3}, x_{6}\} .
Let \mathcal{H} = \{N_{3}, N_{4}\} . We compute that \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNPL-consistent set. Let \mathcal{H}_{1} = \{N_{3}\} . We get that \underline{\sum\limits_{\mathcal{H}_{1}}N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{5}\}\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}\}) , which implies that \mathcal{H}_{1} is not a GNPL-consistent set. Let \mathcal{H}_{2} = \{N_{4}\} . We obtain that \underline{\sum\limits_{\mathcal{H}_{2}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}\}\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) , which implies that \mathcal{H}_{2} is not a GNPL-consistent set. So \mathcal{H} is a GNPL-reduct. Since \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(\{x_{4}, x_{5}\}) = \{x_{2}, x_{4}, x_{5}\}\neq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}\}) , \mathcal{H} is not a GNPU-consistent set, and hence \mathcal{H} is not a GNPU-reduct. Consequently, a GNPL-reduct is not necessarily a GNPU-reduct.
(2) The operator N_{d}:U\rightarrow \mathcal{P}(U) in (1) is replaced by: N_{d}(x_{1}) = N_{d}(x_{2}) = N_{d}(x_{3}) = \{x_{1}, x_{2}, x_{3}\} , N_{d}(x_{4}) = N_{d}(x_{5}) = N_{d}(x_{6}) = \{x_{4}, x_{5}, x_{6}\} . Then, we get another GNDIS (U, \mathcal{N}, N_{d}) . By Definition 6, we have that \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}, x_{6}\}) = \emptyset , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = U , \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{3}, x_{4}, x_{5}, x_{6}\} .
Let \mathcal{H} = \{N_{2}\} . We obtain that \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(x_{i})) = \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNPU-reduct. It follows from \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{3}\}\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{1}, x_{2}, x_{3}\}) that \mathcal{H} is not a GNPL-consistent set, and hence \mathcal{H} is not a GNPL-reduct. Consequently, a GNPU-reduct is not necessarily a GNPL-reduct.
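The membership tests in Example 1 can be automated. The following brute-force sketch (over hypothetical toy data, not the data of Table 1) enumerates GNPL-reducts directly from Definition 8; it is exponential in |\mathcal{N}| and serves only to make the consistency and minimality conditions concrete. The discernibility matrices introduced next are the systematic tool.

```python
# Brute-force sketch of Definition 8: enumerate GNPL-reducts by testing
# every nonempty subfamily of N. All data here are hypothetical.
from itertools import combinations

U = {1, 2, 3}
N = {'N1': {1: {1}, 2: {2, 3}, 3: {3}},
     'N2': {1: {1, 2}, 2: {2}, 3: {3}},
     'N3': {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}}}
classes = [{1}, {2, 3}]                      # decision classes N_d(y)

def pess_lower(X, names):
    return {x for x in U if all(N[n][x] <= X for n in names)}

def consistent(names):
    # H preserves every pessimistic lower approximation (GNPL-consistency)
    return all(pess_lower(D, names) == pess_lower(D, N) for D in classes)

subsets = [set(c) for r in range(1, len(N) + 1)
           for c in combinations(N, r) if consistent(c)]
# a reduct is a consistent set with no consistent proper subset
reducts = [H for H in subsets if not any(G < H for G in subsets)]
print(reducts)
```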
A discernibility matrix is constructed to compute all the GNPL-reducts.
Definition 9. Consider that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . For x\in U and N_{d}(y)\in C_{N_{d}} , define
GD_{L}^{P}(x, N_{d}(y)) = \begin{cases}\{N_{k}\in \mathcal{N}|N_{k}(x)\not\subseteq N_{d}(y)\}, & GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y),\\ \mathcal{N}, & GN_{\mathcal{N}}(x)\subseteq N_{d}(y).\end{cases}
\mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNPL-discernibility matrix (GNPL-D matrix) of (U, \mathcal{N}, N_{d}) .
Proposition 3. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPL-D matrix is \mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Then,
(1) \forall x\in U , GD_{L}^{P}(x, N_{d}(x))\neq\emptyset .
(2) \forall x\in U , N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} .
Proof. (1) \forall x\in U , if GN_{\mathcal{N}}(x)\not\subseteq N_{d}(x) , then there is an N_{k}\in \mathcal{N} satisfying N_{k}(x)\not\subseteq N_{d}(x) . Hence N_{k}\in GD_{L}^{P}(x, N_{d}(x)) , which implies that GD_{L}^{P}(x, N_{d}(x))\neq\emptyset . If GN_{\mathcal{N}}(x)\subseteq N_{d}(x) , then GD_{L}^{P}(x, N_{d}(x)) = \mathcal{N}\neq\emptyset by Definition 9.
(2) For any x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , we get that GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) and N_{k}(x)\not\subseteq N_{d}(y) for all N_{k}\in \mathcal{N} . Thus, GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\not\subseteq N_{d}(y)\} = \mathcal{N} . □
By Proposition 3, GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for all x\in U and N_{d}(y)\in C_{N_{d}} . Utilizing the GNPL-D matrix, the GNPL-reducts can be characterized.
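The entries GD_{L}^{P}(x, N_{d}(y)) of Definition 9 can be materialized mechanically; the following sketch uses the dict encoding of the earlier snippets, with hypothetical toy data.

```python
# Sketch of Definition 9: build the GNPL-D matrix entry by entry.
# Neighborhood operators and decision classes are hypothetical toy data.

U = {1, 2, 3}
N = {'N1': {1: {1}, 2: {2, 3}, 3: {3}},
     'N2': {1: {1, 2}, 2: {2}, 3: {3}}}
classes = [{1}, {2, 3}]

def GN(x):
    return set().union(*(op[x] for op in N.values()))

def entry(x, D):
    # GD_L^P(x, N_d(y)): the operators that discern x from D when GN_N(x)
    # is not inside D, and the whole family N otherwise
    if not GN(x) <= D:
        return {name for name, op in N.items() if not op[x] <= D}
    return set(N)

matrix = {(x, frozenset(D)): entry(x, D) for x in U for D in classes}
for key, val in sorted(matrix.items(), key=str):
    print(key, val)
```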
Theorem 1. Suppose that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Let \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} . Then:
(1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} .
(2) \mathcal{H}\in Red_{L}^{P}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{L}^{P}(x, N_{d}(y)) such that GD_{L}^{P}(x, N_{d}(y))\cap\mathcal{H}_{0} = \emptyset .
(3) N_{k}\in Core_{L}^{P}(\mathcal{N}) \; \Leftrightarrow \; \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\} .
Proof. (1) "\Rightarrow". \forall x\in U , N_{d}(y)\in C_{N_{d}} , if GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) , then x\not\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) . Since \mathcal{H} is a GNPL-consistent set, \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . Hence, x\not\in\underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . It implies that GN_{\mathcal{H}}(x) = \bigcup_{N_{k}\in\mathcal{H}}N_{k}(x)\not\subseteq N_{d}(y) . Then, we can find an N_{k}\in\mathcal{H} such that N_{k}(x)\not\subseteq N_{d}(y) . Therefore, \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset . If GN_{\mathcal{N}}(x)\subseteq N_{d}(y) , then GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} , and it is clear that \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset .
"\Leftarrow". \forall y\in U , by Proposition 1(6), \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y))\subseteq\underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . \forall x\not\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) , we get that GN_{\mathcal{N}}(x)\not\subseteq N_{d}(y) . Since \mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y))\neq\emptyset , let N_{k}\in\mathcal{H}\cap GD_{L}^{P}(x, N_{d}(y)) . Thus, according to Definition 9, N_{k}(x)\not\subseteq N_{d}(y) . It follows that GN_{\mathcal{H}}(x) = \bigcup_{N_{k}\in\mathcal{H}}N_{k}(x)\not\subseteq N_{d}(y) . Therefore, x\not\in\underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) , which implies that \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y))\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) . We get that \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) .
(2) It is verified from (1).
(3) "\Rightarrow". If not, then for every GD_{L}^{P}(x, N_{d}(y))\in\mathcal{GD}_{L}^{P} satisfying N_{k}\in GD_{L}^{P}(x, N_{d}(y)) , we have |GD_{L}^{P}(x, N_{d}(y))|\geq 2 . Let \mathcal{H} = \bigcup\{GD_{L}^{P}(x, N_{d}(y))-\{N_{k}\}|x, y\in U\} ; then \mathcal{H} is a GNPL-consistent set. Thus, there exists a GNPL-reduct \mathcal{H}_{0}\subseteq\mathcal{H} with N_{k}\not\in\mathcal{H}_{0} , which contradicts the fact that N_{k}\in Core_{L}^{P}(\mathcal{N}) .
"\Leftarrow". If not, we can find a reduct \mathcal{H} such that N_{k}\not\in\mathcal{H} . Since GD_{L}^{P}(x, N_{d}(y)) = \{N_{k}\} , we obtain that y\in GN_{\mathcal{N}}(x) , y\in N_{k}(x) and y\not\in N_{l}(x) for all l\neq k\;(l\in\{1, 2, \cdots, m\}) . Then, y\not\in\bigcup_{l\neq k}N_{l}(x) . Since N_{k}\not\in\mathcal{H} , GN_{\mathcal{H}}(x)\subseteq\bigcup_{l\neq k}N_{l}(x) . It implies that y\not\in GN_{\mathcal{H}}(x) . Hence GN_{\mathcal{N}}(x)\neq GN_{\mathcal{H}}(x) , which contradicts the fact that \mathcal{H} is a GNPL-consistent set. □
Definition 10. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPL-D matrix is \mathcal{GD}_{L}^{P} = \{GD_{L}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{L}^{P}) = \wedge\{\vee GD_{L}^{P}(x, N_{d}(y))|GD_{L}^{P}(x, N_{d}(y))\in\mathcal{GD}_{L}^{P}\} .
Here \vee GD_{L}^{P}(x, N_{d}(y)) is the disjunction of all neighborhood operators in GD_{L}^{P}(x, N_{d}(y)) , and \wedge\{\vee GD_{L}^{P}(x, N_{d}(y))|GD_{L}^{P}(x, N_{d}(y))\in\mathcal{GD}_{L}^{P}\} is the conjunction of these disjunctions.
Theorem 2. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq\mathcal{N} . Then \mathcal{H}\in Red_{L}^{P}(\mathcal{N}) \; \Leftrightarrow \; N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{L}^{P}) .
Proof. It is trivial based on Definition 10. □
Remark 1. The \mathcal{GD}_{L}^{P} in Definition 9 can be simplified as (\mathcal{GD}_{L}^{P})^{*} = \{GD_{L}^{P}(x, N_{d}(x))|x\in U\} .
In fact, by Proposition 3, for any x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , \emptyset\neq GD_{L}^{P}(x, N_{d}(x))\subseteq GD_{L}^{P}(x, N_{d}(y)) = \mathcal{N} . Then, by Definition 10, f((\mathcal{GD}_{L}^{P})^{*}) = f(\mathcal{GD}_{L}^{P}) .
We employ Example 2 below to explain the discernibility method for calculating all the GNPL-reducts of a GNDIS.
Example 2. Continued from Example 1(1). By Definition 9, we obtain the GNPL-D matrix \mathcal{GD}_{L}^{P} , and from Remark 1 its simplification (\mathcal{GD}_{L}^{P})^{*} .
By Theorem 1(3), Core_{L}^{P}(\mathcal{N}) = \emptyset . According to Definition 10, f(\mathcal{GD}_{L}^{P}) = f((\mathcal{GD}_{L}^{P})^{*}) = (N_{1}\vee N_{2}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{4}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = (N_{1})\vee(N_{2}\wedge N_{3})\vee(N_{3}\wedge N_{4})\vee(N_{5}) . Then, \{N_{1}\} , \{N_{2}, N_{3}\} , \{N_{3}, N_{4}\} and \{N_{5}\} are GNPL-reducts.
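The expansion of f((\mathcal{GD}_{L}^{P})^{*}) into prime implicants can be mechanized by distribution with absorption (equivalently, by computing minimal hitting sets of the discernibility sets). The sketch below is illustrative code applied to the four clauses of Example 2; it is not the paper's Algorithm 1.

```python
# Sketch: prime implicants of a discernibility function given as a list of
# nonempty "clauses" (sets of operator names), by distribution with
# absorption. The clauses below are the ones obtained in Example 2.

def prime_implicants(clauses):
    results = [frozenset()]            # partial implicants built so far
    for clause in clauses:
        expanded = {r | {lit} for r in results for lit in clause}
        # absorption: keep only the minimal sets
        results = [s for s in expanded if not any(t < s for t in expanded)]
    return results

clauses = [{'N1', 'N2', 'N3', 'N5'},
           {'N1', 'N3', 'N5'},
           {'N1', 'N2', 'N4', 'N5'},
           {'N1', 'N2', 'N3', 'N4', 'N5'}]
for r in sorted(prime_implicants(clauses), key=sorted):
    print(sorted(r))   # expect {N1}, {N2,N3}, {N3,N4}, {N5} as in Example 2
```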
Based on the analysis above, we present Algorithm 1 to calculate all the GNPL-reducts of a GNDIS. In Algorithm 1, the time complexity of Steps 1–11 is O(|U|^{2}|\mathcal{N}|) , and the time complexity of Steps 12–17 is O(\prod_{GD\in\mathcal{GD}_{L}^{P}}|GD|) . The total time complexity of Algorithm 1 is O(|U|^{2}|\mathcal{N}|+\prod_{GD\in\mathcal{GD}_{L}^{P}}|GD|) .
We construct a discernibility matrix to obtain all the GNPU-reducts as follows:
Definition 11. Suppose that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . For x\in U and N_{d}(y)\in C_{N_{d}} , define
GD_{U}^{P}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y)\neq\emptyset\} .
\mathcal{GD}_{U}^{P} = \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNPU-discernibility matrix (GNPU-D matrix) of (U, \mathcal{N}, N_{d}) .
Proposition 4. \forall x\in U , GD_{U}^{P}(x, N_{d}(x)) = \mathcal{N} .
Proof. \forall x\in U , since every N_{k} and N_{d} are reflexive, x\in N_{k}(x)\cap N_{d}(x) for all N_{k}\in\mathcal{N} . Hence GD_{U}^{P}(x, N_{d}(x)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(x)\neq\emptyset\} = \mathcal{N} . □
Theorem 3. Consider that (U, \mathcal{N}, N_{d}) is a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Let \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} . Then:
(1) \mathcal{H}\in Cons_{U}^{P}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{P}(x, N_{d}(y))\neq\emptyset .
(2) \mathcal{H}\in Red_{U}^{P}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{P}(x, N_{d}(y))\neq\emptyset , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{U}^{P}(x, N_{d}(y))\neq\emptyset such that GD_{U}^{P}(x, N_{d}(y))\cap\mathcal{H}_{0} = \emptyset .
(3) N_{k}\in Core_{U}^{P}(\mathcal{N}) \; \Leftrightarrow \; \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{U}^{P}(x, N_{d}(y)) = \{N_{k}\} .
Proof. (1) "\Rightarrow". \forall x\in U, N_{d}(y)\in C_{N_{d}} , if GD_{U}^{P}(x, N_{d}(y))\neq\emptyset , then GN_{\mathcal{N}}(x)\cap N_{d}(y)\neq\emptyset , which implies that x\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) . Since \mathcal{H} is a GNPU-consistent set, \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . It follows that x\in\overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . Hence, there is an N_{k}\in\mathcal{H} satisfying N_{k}(x)\cap N_{d}(y)\neq\emptyset . By Definition 11, N_{k}\in GD_{U}^{P}(x, N_{d}(y)) . Thus, \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset .
"\Leftarrow". \forall y\in U , by Proposition 1(6), \overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y))\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) . \forall x\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y)) , GN_{\mathcal{N}}(x)\cap N_{d}(y)\neq\emptyset , so GD_{U}^{P}(x, N_{d}(y))\neq\emptyset . It follows from \mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset that there exists an N_{k}\in\mathcal{H}\cap GD_{U}^{P}(x, N_{d}(y)) . Hence N_{k}(x)\cap N_{d}(y)\neq\emptyset , which implies that GN_{\mathcal{H}}(x)\cap N_{d}(y)\neq\emptyset . Then, x\in\overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) . So \overline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(N_{d}(y))\subseteq\overline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(N_{d}(y)) .
(2) It is easy to obtain (2) by (1).
(3) Similar to the proof of (3) in Theorem 1. □
Definition 12. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNPU-D matrix is \mathcal{GD}_{U}^{P} = \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{U}^{P}) = \wedge\{\vee GD_{U}^{P}(x, N_{d}(y))|GD_{U}^{P}(x, N_{d}(y))\in\mathcal{GD}_{U}^{P}, GD_{U}^{P}(x, N_{d}(y))\neq\emptyset\} .
Theorem 4. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq\mathcal{N} . Then \mathcal{H}\in Red_{U}^{P}(\mathcal{N}) \; \Leftrightarrow \; N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{U}^{P}) .
Proof. It is clear based on Definition 12. □
By Theorem 4, the set of all GNPU-reducts of a GNDIS and the set of all prime implicants of f(\mathcal{GD}_{U}^{P}) are in one-to-one correspondence. Example 3 is employed to illustrate the above theorems.
Example 3. Continued from Example 1(1). By Definition 11, we obtain the GNPU-D matrix \mathcal{GD}_{U}^{P} .
According to Theorem 3, Core_{U}^{P}(\mathcal{N}) = \{N_{1}\} . By Definition 12, f(\mathcal{GD}_{U}^{P}) = (N_{1})\wedge(N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{4}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{5})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = (N_{1}\wedge N_{3})\vee(N_{1}\wedge N_{5}) .
Then, \{N_{1}, N_{3}\} and \{N_{1}, N_{5}\} are the GNPU-reducts.
3.3. Optimistic multi-granulation reduction of GNDISs
In this subsection, we discuss optimistic multi-granulation reduction of GNDISs.
Definition 13. Let (U, \mathcal{N}, N_{d}) be a GNDIS.
(1) \mathcal{H}\subseteq \mathcal{N} is a generalized neighborhood optimistic lower consistent set (GNOL-consistent set) if \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) for every y\in U . Denote the family of all GNOL-consistent sets by Cons_{L}^{O}(\mathcal{N}) . If \mathcal{H}\in Cons_{L}^{O}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is said to be a GNOL-reduct. Denote the set of all GNOL-reducts by Red_{L}^{O}(\mathcal{N}) ; the core w.r.t. GNOL-reducts is defined as Core_{L}^{O}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{L}^{O}(\mathcal{N})\} .
(2) \mathcal{H}\subseteq \mathcal{N} is a generalized neighborhood optimistic upper consistent set (GNOU-consistent set) if \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) for all y\in U . Denote the family of all GNOU-consistent sets by Cons_{U}^{O}(\mathcal{N}) . If \mathcal{H}\in Cons_{U}^{O}(\mathcal{N}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{N}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is said to be a GNOU-reduct. Denote the set of all GNOU-reducts by Red_{U}^{O}(\mathcal{N}) ; the core w.r.t. GNOU-reducts is defined as Core_{U}^{O}(\mathcal{N}) = \bigcap\{\mathcal{H}|\mathcal{H}\in Red_{U}^{O}(\mathcal{N})\} .
By Definition 13, a GNOL-reduct (or a GNOU-reduct) is a minimal subset of \mathcal{N} that maintains the optimistic lower approximations (or optimistic upper approximations) of all N_{d}(y)\in C_{N_{d}} . The GNOL-reducts and GNOU-reducts are different in general, as illustrated by the next example.
Example 4. Continued from Example 1(1). Replace the Pawlak neighborhood operator N_{d}:U\rightarrow \mathcal{P}(U) of Example 1(1) by: N_{d}(x_{1}) = N_{d}(x_{3}) = \{x_{1}, x_{3}\} , N_{d}(x_{2}) = N_{d}(x_{4}) = N_{d}(x_{5}) = N_{d}(x_{6}) = \{x_{2}, x_{4}, x_{5}, x_{6}\} . Then, we get a new GNDIS (U, \mathcal{N}, N_{d}) . According to Definition 7, \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{5}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}, x_{3}, x_{4}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{4}, x_{5}, x_{6}\} .
Let \mathcal{H} = \{N_{1}, N_{5}\} . We compute that \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNOL-consistent set. Let \mathcal{H}_{1} = \{N_{1}\} . We get that \underline{\sum\limits_{\mathcal{H}_{1}}N_{k}^{O}}(\{x_{1}, x_{3}\}) = \emptyset\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{3}\}) , which implies that \mathcal{H}_{1} is not a GNOL-consistent set. Let \mathcal{H}_{2} = \{N_{5}\} . We obtain that \underline{\sum\limits_{\mathcal{H}_{2}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}\}\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) , which implies that \mathcal{H}_{2} is not a GNOL-consistent set. So, \mathcal{H} is a GNOL-reduct. However, one can check that \mathcal{H} does not preserve the optimistic upper approximations of all decision classes, i.e., \overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(x_{i}))\neq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(x_{i})) for some i ; thus \mathcal{H} is not a GNOU-consistent set, and hence not a GNOU-reduct.
Let \mathcal{H} = \{N_{2}, N_{3}\} . We obtain that \overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(x_{i})) = \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(x_{i})) for i = 1, 2, \cdots, 6 . Thus, \mathcal{H} is a GNOU-consistent set. Let \mathcal{H}_{1} = \{N_{2}\} . Then, \overline{\sum\limits_{\mathcal{H}_{1}}N_{k}^{O}}(\{x_{1}, x_{3}\}) = \{x_{1}, x_{3}, x_{4}, x_{5}, x_{6}\}\neq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{3}\}) , which implies that \mathcal{H}_{1} is not a GNOU-consistent set. Let \mathcal{H}_{2} = \{N_{3}\} . Then, \overline{\sum\limits_{\mathcal{H}_{2}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = U\neq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) , which implies that \mathcal{H}_{2} is not a GNOU-consistent set. Hence, \mathcal{H} is a GNOU-reduct. It follows from \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) = \{x_{2}, x_{5}\}\neq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{4}, x_{5}, x_{6}\}) that \mathcal{H} is not a GNOL-consistent set, and hence \mathcal{H} is not a GNOL-reduct.
Hence, a GNOU-reduct is not necessarily a GNOL-reduct, and a GNOL-reduct is not necessarily a GNOU-reduct. In the following, we calculate the GNOL-reducts and GNOU-reducts of a GNDIS by the discernibility technique.
Definition 14. Consider a GNDIS (U, \mathcal{N}, N_{d}) , where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . For x\in U and N_{d}(y)\in C_{N_{d}} , define
GD_{L}^{O}(x, N_{d}(y)) = \begin{cases}\{N_{k}\in \mathcal{N}|N_{k}(x)\subseteq N_{d}(y)\}, & x\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)),\\ \mathcal{N}, & x\not\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)).\end{cases}
\mathcal{GD}_{L}^{O} = \{GD_{L}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNOL-discernibility matrix (GNOL-D matrix) of (U, \mathcal{N}, N_{d}) .
Theorem 5. In a GNDIS (U, \mathcal{N}, N_{d}) with \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} , let \emptyset\neq\mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} , then
(1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset for each GD_{L}^{O}(x, N_{d}(y))\in\mathcal{GD}_{L}^{O} .
(2) \mathcal{H}\in Red_{L}^{O}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{L}^{O}(x, N_{d}(y))\in\mathcal{GD}_{L}^{O} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists some GD_{L}^{O}(x, N_{d}(y))\in\mathcal{GD}_{L}^{O} such that GD_{L}^{O}(x, N_{d}(y))\cap\mathcal{H}_{0} = \emptyset .
(3) N_{k}\in Core_{L}^{O}(\mathcal{N}) \; \Leftrightarrow \; \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{L}^{O}(x, N_{d}(y)) = \{N_{k}\} .
Proof. (1) "\Rightarrow". \forall x\in U, N_{d}(y)\in C_{N_{d}} , if x\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) , then according to Definition 7, we can find an N_{k}\in\mathcal{H} such that N_{k}(x)\subseteq N_{d}(y) . Thus, N_{k}\in GD_{L}^{O}(x, N_{d}(y)) . It follows that \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset . If x\not\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) , then GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} , and it is clear that \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset .
"\Leftarrow". \forall y\in U , by Proposition 2(6), \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y))\subseteq\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) . \forall x\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) , we obtain that GD_{L}^{O}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\subseteq N_{d}(y)\}\neq\emptyset . Since \mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y))\neq\emptyset , let N_{k}\in\mathcal{H}\cap GD_{L}^{O}(x, N_{d}(y)) ; then N_{k}(x)\subseteq N_{d}(y) . Hence, x\in\underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) . It implies that \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y))\subseteq\underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) .
(2) It is verified by (1).
(3) With reference to the proof of (3) in Theorem 1. □
Proposition 5. \forall x\in U, N_{d}(y)\in C_{N_{d}} , if x\not\in N_{d}(y) , then GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} .
Proof. \forall N_{d}(y)\in C_{N_{d}} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y))\subseteq N_{d}(y) . If x\not\in N_{d}(y) , then x\not\in\underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) . Hence, according to Definition 14, GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} . □
Remark 2. The \mathcal{GD}_{L}^{O} in Definition 14 can be simplified as (\mathcal{GD}_{L}^{O})^{*} = \{GD_{L}^{O}(x, N_{d}(x))|x\in U\} .
In fact, by Proposition 5, for every x\in U and N_{d}(y)\in C_{N_{d}} with x\not\in N_{d}(y) , GD_{L}^{O}(x, N_{d}(y)) = \mathcal{N} . Then, the \mathcal{GD}_{L}^{O} in Theorem 5 can be replaced by (\mathcal{GD}_{L}^{O})^{*} .
Definition 15. Assume that (U, \mathcal{N}, N_{d}) is a GNDIS, whose simplified GNOL-D matrix is (\mathcal{GD}_{L}^{O})^{*} = \{GD_{L}^{O}(x, N_{d}(x))|x\in U\} . Define f((\mathcal{GD}_{L}^{O})^{*}) = \wedge\{\vee GD_{L}^{O}(x, N_{d}(x))|x\in U\} .
Theorem 6. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq\mathcal{N} . Then \mathcal{H}\in Red_{L}^{O}(\mathcal{N}) \; \Leftrightarrow \; N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f((\mathcal{GD}_{L}^{O})^{*}) .
Proof. It can be obtained by Definition 15. □
By means of Theorem 6, all GNOL-reducts of a GNDIS can be obtained from f((\mathcal{GD}_{L}^{O})^{*}) . An algorithm for calculating all the GNOL-reducts of a GNDIS is presented as Algorithm 2. The total time complexity of Algorithm 2 is O(|U|^{2}|\mathcal{N}|+\prod_{GD\in(\mathcal{GD}_{L}^{O})^{*}}|GD|) . Example 5 is employed to illustrate the calculation process.
Example 5. Continued from Example 1(1). We have that \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{4}, x_{5}\}) = \{x_{5}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{6}\}) = \emptyset . According to Definition 14, we obtain the GNOL-D matrix \mathcal{GD}_{L}^{O} and, by Remark 2, its simplification (\mathcal{GD}_{L}^{O})^{*} .
By Definition 15, f((\mathcal{GD}_{L}^{O})^{*}) = (N_{2}\vee N_{4})\wedge(N_{4})\wedge(N_{3})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = N_{3}\wedge N_{4} .
Hence, \{N_{3}, N_{4}\} is the unique GNOL-reduct, and Core_{L}^{O}(\mathcal{N}) = \{N_{3}, N_{4}\} .
Now, we construct a discernibility matrix to calculate GNOU-reducts.
Definition 16. Let (U, \mathcal{N}, N_{d}) be a GNDIS, where \mathcal{N} = \{N_{1}, N_{2}, \cdots, N_{m}\} . For x\in U and N_{d}(y)\in C_{N_{d}} , define
GD_{U}^{O}(x, N_{d}(y)) = \begin{cases}\{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y) = \emptyset\}, & x\not\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)),\\ \mathcal{N}, & x\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)).\end{cases}
\mathcal{GD}_{U}^{O} = \{GD_{U}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} is called a GNOU-discernibility matrix (GNOU-D matrix) of (U, \mathcal{N}, N_{d}) .
It is easy to see that GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all x\in U, N_{d}(y)\in C_{N_{d}} .
Theorem 7. Given a GNDIS (U, \mathcal{N}, N_{d}) , let \mathcal{H}\subseteq \mathcal{N} and N_{k}\in \mathcal{N} , then
(1) \mathcal{H}\in Cons_{U}^{O}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{O}(x, N_{d}(y))\in\mathcal{GD}_{U}^{O} .
(2) \mathcal{H}\in Red_{U}^{O}(\mathcal{N}) \; \Leftrightarrow \; \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all GD_{U}^{O}(x, N_{d}(y))\in\mathcal{GD}_{U}^{O} , and for any \mathcal{H}_{0}\subset\mathcal{H} , there exists a GD_{U}^{O}(x, N_{d}(y))\in\mathcal{GD}_{U}^{O} such that GD_{U}^{O}(x, N_{d}(y))\cap\mathcal{H}_{0} = \emptyset .
(3) N_{k}\in Core_{U}^{O}(\mathcal{N}) \; \Leftrightarrow \; \exists x\in U, N_{d}(y)\in C_{N_{d}} , GD_{U}^{O}(x, N_{d}(y)) = \{N_{k}\} .
Proof. (1) "\Rightarrow". \forall x\in U , N_{d}(y)\in C_{N_{d}} , if x\not\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) , then x\not\in\overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) , since \mathcal{H} is a GNOU-consistent set. Then, \exists N_{k}\in\mathcal{H} , N_{k}(x)\cap N_{d}(y) = \emptyset . It implies that N_{k}\in GD_{U}^{O}(x, N_{d}(y)) . Therefore, \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset . If x\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) , then GD_{U}^{O}(x, N_{d}(y)) = \mathcal{N} , so \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset .
"\Leftarrow". \forall y\in U , by Proposition 2(6), \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y))\subseteq\overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) . For any x\not\in\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) , GD_{U}^{O}(x, N_{d}(y)) = \{N_{k}\in \mathcal{N}|N_{k}(x)\cap N_{d}(y) = \emptyset\}\neq\emptyset . Since \mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset , let N_{k}\in\mathcal{H}\cap GD_{U}^{O}(x, N_{d}(y)) . Then, N_{k}(x)\cap N_{d}(y) = \emptyset . Thus, x\not\in\overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) . It follows that \overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y))\subseteq\overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) . We conclude that \overline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(y)) = \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(y)) .
(2) It is easy to obtain (2) by (1).
(3) By reference to the proof of (3) in Theorem 1. □
Definition 17. Let (U, \mathcal{N}, N_{d}) be a GNDIS, whose GNOU-D matrix is \mathcal{GD}_{U}^{O} = \{GD_{U}^{O}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . Define f(\mathcal{GD}_{U}^{O}) = \wedge\{\vee GD_{U}^{O}(x, N_{d}(y))|GD_{U}^{O}(x, N_{d}(y))\in\mathcal{GD}_{U}^{O}\} .
Theorem 8. Let \mathcal{H} = \{N_{1}, N_{2}, \cdots, N_{k}\}\subseteq\mathcal{N} . Then \mathcal{H}\in Red_{U}^{O}(\mathcal{N}) \; \Leftrightarrow \; N_{1}\wedge N_{2}\wedge\cdots\wedge N_{k} is a prime implicant of f(\mathcal{GD}_{U}^{O}) .
Proof. It is trivial based on Definition 17. □
By Theorem 8, all the GNOU-reducts can be obtained by the conjunctive and disjunctive operations over \mathcal{GD}_{U}^{O} .
Example 6. Continued from Example 2. We have that \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{2}, x_{3}\}) = \{x_{1}, x_{2}, x_{3}, x_{4}, x_{6}\} , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{4}, x_{5}\}) = \{x_{4}, x_{5}\} , \overline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} . By Definition 16, we deduce the GNOU-D matrix \mathcal{GD}_{U}^{O} .
Then, f(\mathcal{GD}_{U}^{O}) = (N_{2}\vee N_{3}\vee N_{4}\vee N_{5})\wedge(N_{4})\wedge(N_{1}\vee N_{2}\vee N_{4})\wedge(N_{3})\wedge(N_{1}\vee N_{2}\vee N_{3}\vee N_{4}\vee N_{5}) = N_{3}\wedge N_{4} .
It follows from Theorem 8 that \{N_{3}, N_{4}\} is the unique GNOU-reduct.
Remark 3. (1) There is no necessary relationship between GNPL-reducts and GNOL-reducts.
From Examples 2 and 5, \{N_{5}\} is a GNPL-reduct. However, \{N_{5}\} is not a GNOL-reduct.
Continued from Example 1(1). A Pawlak neighborhood operator N_{d}:U\rightarrow \mathcal{P}(U) is defined by: N_{d}(x_{1}) = N_{d}(x_{6}) = \{x_{1}, x_{6}\} , N_{d}(x_{2}) = N_{d}(x_{3}) = N_{d}(x_{4}) = N_{d}(x_{5}) = \{x_{2}, x_{3}, x_{4}, x_{5}\} . Then, we get a GNDIS (U, \mathcal{N}, N_{d}) . By Definition 7, \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{1}, x_{6}\}) = \{x_{6}\} , \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{5}\} . Let \mathcal{H} = \{N_{2}\} . Then \underline{\sum\limits_{\mathcal{N}}N_{k}^{O}}(N_{d}(x_{i})) = \underline{\sum\limits_{\mathcal{H}}N_{k}^{O}}(N_{d}(x_{i}))\;(i = 1, 2, \cdots, 6) , which implies that \mathcal{H} is a GNOL-reduct. Since \underline{\sum\limits_{\mathcal{N}}N_{k}^{P}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{5}\} and \underline{\sum\limits_{\mathcal{H}}N_{k}^{P}}(\{x_{2}, x_{3}, x_{4}, x_{5}\}) = \{x_{2}, x_{3}, x_{5}\} , \mathcal{H} is not a GNPL-consistent set, and hence not a GNPL-reduct.
(2) There is no necessary relationship between GNPU-reducts and GNOU-reducts.
By Examples 3 and 6, \{N_{1}, N_{3}\} is a GNPU-reduct but not a GNOU-reduct, and \{N_{3}, N_{4}\} is a GNOU-reduct but not a GNPU-reduct.
4. Relationships between the multi-granulation reduction of DMSs and that of GNDISs
In a DMS (U, \mathcal{A}, d) with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and U/R_{d} = \{[y]_{d}|y\in U\} , define a mapping N_{k}:U\rightarrow \mathcal{P}(U) by N_{k}(x) = [x]_{A_{k}} for all x\in U \; (k = 1, 2, \cdots, m) and a mapping N_{d}:U\rightarrow \mathcal{P}(U) by N_{d}(y) = [y]_{d} for all y\in U . Thus, N_{k}(k = 1, 2, \cdots, m) is a reflexive neighborhood operator on U and N_{d} is a Pawlak neighborhood operator, and we get a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} , which is a GNDIS induced by the DMS (U, \mathcal{A}, d) . Define a mapping I:\mathcal{A}\rightarrow \mathcal{N}^{C} by I(A_{k}) = N_{k} for all A_{k}\in \mathcal{A} . From the definition of \mathcal{N}^{C} , it is easy to get that I is a bijection.
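The construction of the induced GNDIS is mechanical; the following sketch (with a hypothetical data table) builds the operators N_{k} and N_{d} from attribute subsets.

```python
# Sketch: induce the GNDIS (U, N^C, N_d) from a DMS. Each N_k maps x to its
# equivalence class under the attribute subset A_k. The table is hypothetical.

rows = {        # object -> {attribute: value}, plus the decision 'd'
    1: {'a1': 0, 'a2': 1, 'd': 'yes'},
    2: {'a1': 0, 'a2': 0, 'd': 'yes'},
    3: {'a1': 1, 'a2': 0, 'd': 'no'},
}
A = [['a1'], ['a2']]          # the family of attribute subsets A_1, A_2

def induced_operator(attrs):
    # N_k(x) = [x]_{A_k}: objects agreeing with x on every attribute in A_k
    return {x: {y for y in rows
                if all(rows[y][a] == rows[x][a] for a in attrs)}
            for x in rows}

N_C = [induced_operator(Ak) for Ak in A]
N_d = induced_operator(['d'])  # the Pawlak operator from the decision classes
print(N_C[0], N_d, sep='\n')
```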
Proposition 6. Suppose that (U, \mathcal{A}, d) is a DMS and \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for each X\subseteq U ,
\underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \underline{\sum\limits_{\mathcal{N}^{C}}N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}(X) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{P}}(X) ,
\underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \underline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(X) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(X) .
Proof. Since N_{k}(x) = [x]_{A_{k}} for all x\in U \; (k = 1, 2, \cdots, m) , it follows directly from Definitions 1, 2, 6 and 7. □
From Proposition 6, the generalized neighborhood pessimistic rough set model in Definition 6 is a general model of the pessimistic multi-granulation rough set model defined in [14], and the generalized neighborhood optimistic approximations in Definition 7 are an extension of the optimistic multi-granulation approximations proposed in [18].
4.1. Pessimistic multi-granulation reduction of DMSs
The pessimistic multi-granulation reduction of DMSs is explored in [14,18,21].
Definition 18. [14,18,21] Assume that (U, \mathcal{A}, d) is a DMS, where \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . Let \mathcal{H}\subseteq \mathcal{A} and \mathcal{H}\neq \emptyset .
(1) \mathcal{H} is called a complete pessimistic lower consistent set (CPL-consistent set) if \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all CPL-consistent sets by Cons_{L}^{P}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a CPL-reduct of (U, \mathcal{A}, d) . Denote the family of all CPL-reducts of (U, \mathcal{A}, d) by Red_{L}^{P}(\mathcal{A}) , and Core_{L}^{P}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{L}^{P}(\mathcal{A})}\mathcal{H} is said to be a CPL-core.
(2) \mathcal{H} is called a complete pessimistic upper consistent set (CPU-consistent set) if \overline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all CPU-consistent sets by Cons_{U}^{P}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a CPU-reduct of (U, \mathcal{A}, d) . Denote the family of all CPU-reducts of (U, \mathcal{A}, d) by Red_{U}^{P}(\mathcal{A}) , and Core_{U}^{P}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{U}^{P}(\mathcal{A})}\mathcal{H} is said to be a CPU-core.
The relationships between the pessimistic multi-granulation reduction of (U, \mathcal{A}, d) and the pessimistic multi-granulation reduction of the GNDIS (U, \mathcal{N}^{C}, N_{d}) induced by (U, \mathcal{A}, d) are presented as follows:
Theorem 9. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A} , A_{k} \in \mathcal{A} ,
(1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{C}) .
(2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{P}(\mathcal{N}^{C}) .
(3) A_{k} \in Core_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{P}(\mathcal{N}^{C}) .
(4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{P}(\mathcal{N}^{C}) .
(5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{P}(\mathcal{N}^{C}) .
(6) A_{k} \in Core_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{P}(\mathcal{N}^{C}) .
Proof. (1) Due to Definitions 8 and 18, and Proposition 6,
\mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; \underline{\sum\limits_{\mathcal{A}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U
\quad \Leftrightarrow \; \underline{\sum\limits_{\mathcal{N}^{C}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{P}}(N_{d}(y)) for all y\in U
\quad \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{C}) .
(2) and (3). According to (1), Definitions 8 and 18, the conclusions are obtained.
(4)–(6). Similar to the proof of (1)–(3), the conclusions can be obtained by Definitions 8 and 18, and Proposition 6. □
To characterize the knowledge reduction of DMSs, discernibility matrices were designed by Tan et al. [21]. For any \mathcal{H}\subseteq \mathcal{A} , define the decision function by f_{\mathcal{H}}(x_{i}) = \{d(x_{j})|x_{j}\in N_{\mathcal{H}}(x_{i})\} . For each x\in U , define
P(x) = \begin{cases}\{A_{k}\in \mathcal{A}||f_{\{A_{k}\}}(x)| > 1\}, & |f_{\mathcal{A}}(x)| > 1,\\ \mathcal{A}, & |f_{\mathcal{A}}(x)| = 1.\end{cases}
\mathcal{P} = \{P(x)|x\in U\} is called a CPL-discernibility matrix. For any (x, y)\in U\times U , define
Q(x, y) = \begin{cases}\{A_{k}\in \mathcal{A}|d(x)\in f_{\{A_{k}\}}(y)\}, & |f_{\mathcal{A}}(y)| > 1,\\ \mathcal{A}, & |f_{\mathcal{A}}(y)| = 1.\end{cases}
\mathcal{Q} = \{Q(x, y)|(x, y)\in U\times U\} is called a CPU-discernibility matrix.
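As an illustration of how the decision function f_{\mathcal{H}} and the entries P(x) can be computed, consider the sketch below. The toy table is hypothetical, and the implementation of P follows the case-wise form displayed above, so it should be read as an illustrative sketch rather than as the exact formulation of [21].

```python
# Sketch of the decision function f_H and the CPL-discernibility entries P(x).
# The equivalence classes and decision values below are hypothetical.

U = {1, 2, 3}
cls = {'A1': {1: {1, 2}, 2: {1, 2}, 3: {3}},    # [x]_{A_k} per granulation
       'A2': {1: {1}, 2: {2, 3}, 3: {2, 3}}}
d = {1: 'yes', 2: 'yes', 3: 'no'}

def f(x, names):
    # f_H(x): decision values seen inside N_H(x), the union of the classes of x
    region = set().union(*(cls[n][x] for n in names))
    return {d[y] for y in region}

def P(x):
    # granulations that fail to keep x inside one decision class; the whole
    # family A when even N_A(x) is decision-consistent (|f_A(x)| = 1)
    if len(f(x, cls)) > 1:
        return {n for n in cls if len(f(x, [n])) > 1}
    return set(cls)

for x in U:
    print(x, P(x))
```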
Due to Definitions 9 and 11, and Theorems 1 and 3, we obtain
Corollary 1. [21] For any \mathcal{H}\subseteq \mathcal{A} , A_{k}\in \mathcal{A} ,
(1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap P(x)\neq \emptyset for all x\in U .
(2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap P(x)\neq \emptyset for all x\in U , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists an x\in U such that P(x)\cap \mathcal{H}_{0} = \emptyset .
(3) A_{k}\in Core_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; \exists x\in U , P(x) = \{A_{k}\} .
(4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap Q(x, y)\neq \emptyset for all Q(x, y)\neq \emptyset .
(5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap Q(x, y)\neq \emptyset for all Q(x, y)\neq \emptyset , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists a Q(x, y)\in \mathcal{Q} such that Q(x, y)\cap \mathcal{H}_{0} = \emptyset .
(6) A_{k}\in Core_{U}^{P}(\mathcal{A}) \; \Leftrightarrow there exist some (x, y)\in U\times U such that Q(x, y) = \{A_{k}\} .
Proof. (1)–(3). According to Theorem 9, \mathcal{H} is a CPL-consistent set (CPL-reduct) of (U, \mathcal{A}, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNPL-consistent set (GNPL-reduct) of (U, \mathcal{N}^{C}, N_{d}) . By Property 4 in [21], for any x\in U and \mathcal{H}\subseteq \mathcal{A} , |f_{\mathcal{H}}(x)| > 1 if and only if N_{\mathcal{H}}(x)\not\subseteq [x]_{d} . Then, by Definition 9, I(P(x)) = GD_{L}^{P}(x, N_{d}(x)) for all x\in U .
According to Theorem 1, \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{L}^{P}(x, N_{d}(x))\neq\emptyset for each x\in U \; \Leftrightarrow \; \mathcal{H} \cap P(x)\neq\emptyset for each x\in U . Then, we get (1). We can obtain (2) and (3) similarly.
(4)–(6). By Property 4 in [21], for any x\in U , |f_{\mathcal{A}}(x)| > 1 \; \Leftrightarrow \; N_{\mathcal{A}}(x)\not\subseteq [x]_{d} \; \Leftrightarrow \; GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) . For any x, y\in U , since f_{\{A_{k}\}}(x) = \{d(z)|z\in [x]_{A_{k}}\} , we have d(y)\in f_{\{A_{k}\}}(x) \; \Leftrightarrow \; [x]_{A_{k}}\cap [y]_{d}\neq\emptyset ; hence \{A_{k}\in \mathcal{A}|d(y)\in f_{\{A_{k}\}}(x)\} = \{A_{k}\in \mathcal{A}|[x]_{A_{k}}\cap [y]_{d}\neq\emptyset\} .
For any x, y\in U , there are four cases. (a) GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y)\neq \emptyset , that is, N_{\mathcal{A}}(x)\not\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d}\neq \emptyset . By Definition 11, I(Q(y, x)) = GD_{U}^{P}(x, N_{d}(y)) . (b) GN_{\mathcal{N}^{C}}(x)\not\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y) = \emptyset , namely, N_{\mathcal{A}}(x)\not\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d} = \emptyset . From Definition 11, I(Q(y, x)) = \emptyset = GD_{U}^{P}(x, N_{d}(y)) . (c) GN_{\mathcal{N}^{C}}(x)\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y)\neq \emptyset , that is, N_{\mathcal{A}}(x)\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d}\neq \emptyset . Then N_{d}(x)\cap N_{d}(y)\neq \emptyset , which follows that N_{d}(x) = N_{d}(y) . According to Definition 11, I(Q(y, x)) = I(\mathcal{A}) = \mathcal{N}^{C} = GD_{U}^{P}(x, N_{d}(x)) = GD_{U}^{P}(x, N_{d}(y)) . (d) GN_{\mathcal{N}^{C}}(x)\subseteq N_{d}(x) and GN_{\mathcal{N}^{C}}(x)\cap N_{d}(y) = \emptyset , i.e., N_{\mathcal{A}}(x)\subseteq [x]_{d} and N_{\mathcal{A}}(x)\cap [y]_{d} = \emptyset . By Definition 11, I(Q(y, x)) = I(\mathcal{A}) = \mathcal{N}^{C} and GD_{U}^{P}(x, N_{d}(y)) = \emptyset . In conclusion, \{I(Q(y, x))|x, y\in U\} = \{GD_{U}^{P}(x, N_{d}(y))|x, y\in U\} .
Due to Theorem 3, \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{P}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{U}^{P}(x, N_{d}(y))\neq\emptyset for each GD_{U}^{P}(x, N_{d}(y))\neq\emptyset \; \Leftrightarrow \; I(\mathcal{H}) \cap I(Q(x, y))\neq\emptyset for each I(Q(x, y))\neq\emptyset \; \Leftrightarrow \; \mathcal{H} \cap Q(x, y)\neq\emptyset for each Q(x, y)\neq\emptyset . Hence, (4) is found, and (5) and (6) are also obtained by Theorem 3. □
Remark 4. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \cdots, N_{m}\} . From the proof of Corollary 1, \{I(Q(y, x))|x, y\in U\} = \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} . However, the matrix \{GD_{U}^{P}(x, N_{d}(y))|x\in U, N_{d}(y)\in C_{N_{d}}\} merges duplicate elements and contains more empty sets in comparison with \{I(Q(y, x))|x, y\in U\} .
4.2. Optimistic multi-granulation reduction of DMSs
Optimistic multi-granulation reduction of DMSs is also discussed in [14,18,21].
Definition 19. [14,18,21] Assume that (U, \mathcal{A}, d) is a DMS, where \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . Let \mathcal{H}\subseteq \mathcal{A} and \mathcal{H}\neq \emptyset .
(1) \mathcal{H} is called a complete optimistic lower consistent set (COL-consistent set) if \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all COL-consistent sets by Cons_{L}^{O}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a COL-reduct of (U, \mathcal{A}, d) . Denote the family of all COL-reducts of (U, \mathcal{A}, d) by Red_{L}^{O}(\mathcal{A}) , and Core_{L}^{O}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{L}^{O}(\mathcal{A})}\mathcal{H} is called a COL-core.
(2) \mathcal{H} is called a complete optimistic upper consistent set (COU-consistent set) if \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all COU-consistent sets by Cons_{U}^{O}(\mathcal{A}) . Moreover, if \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{A}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is a COU-reduct of (U, \mathcal{A}, d) . Denote the family of all COU-reducts of (U, \mathcal{A}, d) by Red_{U}^{O}(\mathcal{A}) , and Core_{U}^{O}(\mathcal{A}) = \bigcap_{\mathcal{H}\in Red_{U}^{O}(\mathcal{A})}\mathcal{H} is called a COU-core.
The optimistic multi-granulation reduction of (U, \mathcal{A}, d) is closely associated with the optimistic multi-granulation reduction of the GNDIS (U, \mathcal{N}^{C}, N_{d}) induced by (U, \mathcal{A}, d) .
Theorem 10. Let (U, \mathcal{A}, d) be a DMS with \mathcal{A} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A} , A_{k} \in \mathcal{A} ,
(1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) .
(2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{O}(\mathcal{N}^{C}) .
(3) A_{k} \in Core_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{O}(\mathcal{N}^{C}) .
(4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Cons_{U}^{O}(\mathcal{N}^{C}) .
(5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{O}(\mathcal{N}^{C}) .
(6) A_{k} \in Core_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{O}(\mathcal{N}^{C}) .
Proof. (1) By Definitions 13 and 19, and Proposition 6,
\mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U
\quad \Leftrightarrow \; \underline{\sum\limits_{\mathcal{N}^{C}}N_{k}^{O}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{O}}(N_{d}(y)) for all y\in U
\quad \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) .
(2) and (3). Due to (1) and Definitions 13 and 19, the conclusions are obtained.
(4)–(6). With reference to the proof of (1)–(3), the conclusions can be proved by Definitions 13 and 19, and Proposition 6. □
In [21], Tan et al. also presented a discernibility matrix to obtain COU-reducts. However, the optimistic lower reduction was not characterized by discernibility matrices in [21]. We now present a discernibility matrix to compute COL-reducts.
Definition 20. Let (U, \mathcal{A}, d) be a DMS. For each x\in U , define
MD(x) = \begin{cases}\{A_{k}\in \mathcal{A}||f_{\{A_{k}\}}(x)| = 1\}, & \exists A_{k}\in \mathcal{A}, |f_{\{A_{k}\}}(x)| = 1,\\ \mathcal{A}, & \text{otherwise}.\end{cases}
\mathcal{MD} = \{MD(x)|x\in U\} is called a COL-discernibility matrix.
For any x, y\in U , define G(x, y) = \{A_{k}\in \mathcal{A}|d(y)\not\in f_{\{A_{k}\}}(x)\} [21]. \mathcal{G} = \{G(x, y)|(x, y)\in U\times U\} is called a COU-discernibility matrix.
Remark 5. From the proof of Corollary 1, d(y)\not\in f_{\{A_{k}\}}(x)\Leftrightarrow [x]_{A_{k}}\cap [y]_{d} = \emptyset . If \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = U for all y\in U , then [x]_{A_{k}}\cap [y]_{d}\neq\emptyset for all x\in U and A_{k}\in \mathcal{A} . It follows that G(x, y) = \emptyset for all x, y\in U . Hence, we cannot obtain the COU-reducts from \mathcal{G} in that case. Instead, G(x, y) is redefined by
G(x, y) = \begin{cases}\{A_{k}\in \mathcal{A}|d(y)\not\in f_{\{A_{k}\}}(x)\}, & x\not\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}),\\ \mathcal{A}, & x\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}).\end{cases}
Corollary 2. For any \mathcal{H}\subseteq \mathcal{A} , A_{k}\in \mathcal{A} ,
(1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap MD(x)\neq \emptyset for all MD(x)\in \mathcal{MD} .
(2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap MD(x)\neq \emptyset for all MD(x)\in \mathcal{MD} , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists an MD(x)\in \mathcal{MD} such that MD(x)\cap \mathcal{H}_{0} = \emptyset .
(3) A_{k}\in Core _{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; \exists x\in U , MD(x) = \{A_{k}\} .
(4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap G(x, y)\neq \emptyset for all G(x, y)\in \mathcal{G} [21].
(5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; \mathcal{H}\cap G(x, y)\neq \emptyset for all G(x, y)\in\mathcal{G} , and for every \mathcal{H}_{0}\subset \mathcal{H} , there exists a G(x, y)\in\mathcal{G} such that G(x, y)\cap \mathcal{H}_{0} = \emptyset [21].
(6) A_{k}\in Core _{U}^{O}(\mathcal{A}) \; \Leftrightarrow there exists a (x, y)\in U\times U such that G(x, y) = \{A_{k}\} [21].
Proof. (1)–(3). By Theorem 10, \mathcal{H} is a COL-consistent set (or COL-reduct) of (U, \mathcal{A}, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOL-consistent set (or GNOL-reduct) of (U, \mathcal{N}^{C}, N_{d}) . For any x\in U and A_{k}\in \mathcal{A} , |f_{\{A_{k}\}}(x)| = 1 if and only if N_{\{A_{k}\}}(x)\subseteq [x]_{d} . It follows from Definitions 14 and 20 that I(MD(x)) = GD_{L}^{O}(x, N_{d}(x)) for all x\in U .
Due to Theorem 5, \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{L}^{O}(x, N_{d}(x))\neq\emptyset for every x\in U \; \Leftrightarrow \; \mathcal{H} \cap MD(x)\neq\emptyset for every MD(x)\in \mathcal{MD} . Hence (1) is obtained. (2) and (3) can be deduced from Theorem 5 analogously.
(4)–(6). According to Theorem 10, \mathcal{H} is a COU-consistent set (or COU-reduct) of (U, \mathcal{A}, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOU-consistent set (or GNOU-reduct) of (U, \mathcal{N}^{C}, N_{d}) .
For any x, y\in U , if x\not\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(N_{d}(y)) , then I(G(x, y)) = I(\{A_{k}\in \mathcal{A}|[x]_{A_{k}}\cap [y]_{d} = \emptyset\}) = \{N_{k}\in \mathcal{N}^{C}|N_{k}(x)\cap N_{d}(y) = \emptyset\} = GD_{U}^{O}(x, N_{d}(y)) . If x\in \overline{\sum\limits_{\mathcal{A}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{N}^{C}} N_{k}^{O}}(N_{d}(y)) , then I(G(x, y)) = \mathcal{N}^{C} = GD_{U}^{O}(x, N_{d}(y)) . We can conclude that I(G(x, y)) = GD_{U}^{O}(x, N_{d}(y)) for all x, y\in U .
By Theorem 7, \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}) \; \Leftrightarrow \; I(\mathcal{H})\in Cons_{U}^{O}(\mathcal{N}^{C}) \; \Leftrightarrow \; I(\mathcal{H}) \cap GD_{U}^{O}(x, N_{d}(y))\neq\emptyset for all x, y\in U \; \Leftrightarrow \; I(\mathcal{H}) \cap I(G(x, y))\neq\emptyset for each G(x, y)\in \mathcal{G} \; \Leftrightarrow \; \mathcal{H} \cap G(x, y)\neq\emptyset for each G(x, y)\in \mathcal{G} . Hence, (4) is proved. (5) and (6) are also proved by Theorem 7. □
We can see that a DMS can be regarded as a GNDIS. Furthermore, by Definition 10 and Theorem 2, the CPL-reducts can be obtained from the prime implicants of f(\mathcal{P}) . According to Definition 12 and Theorem 4, the CPU-reducts can be obtained from the prime implicants of f(\mathcal{Q}) . By Definition 15 and Theorem 6, the COL-reducts can be found from the prime implicants of f(\mathcal{MD}) . By Definition 17 and Theorem 8, the COU-reducts can be found from the prime implicants of f(\mathcal{G}) .
Example 7. A DMS (U, \mathcal{A}, d) is presented in Table 3, where \mathcal{A} = \{A_{1} = \{a_{1}, a_{2}, a_{3}\}, A_{2} = \{a_{4}, a_{5}\}, A_{3} = \{a_{6}, a_{7}\}, A_{4} = \{a_{8}, a_{9}, a_{10}\}\} . The granules [x_{i}]_{A_{k}}, [x_{i}]_{d} , and N_{\mathcal{A}}(x_{i}) \; (i = 1, \cdots, 6; k = 1, \cdots, 4) are presented in Table 4. We obtain the CPU-discernibility matrix \mathcal{Q} .
By Corollary 1, Core_{U}^{P}(\mathcal{A}) = \{A_{3}, A_{4}\} , and \mathcal{A}_{0} = \{A_{3}, A_{4}\} is the unique CPU-reduct.
The DMS (U, \mathcal{A}, d) induces a GNDIS (U, \mathcal{N}^{C}, N_{d}) with \mathcal{N}^{C} = \{N_{1}, N_{2}, N_{3}, N_{4}\} , where N_{i}(x) = [x]_{A_{i}}\;(i = 1, 2, 3, 4) and N_{d}(x) = [x]_{d} for all x\in U . By Definition 11, we obtain the GNPU-D matrix \mathcal{GD}_{U}^{P} of (U, \mathcal{N}^{C}, N_{d}) .
By Theorem 3, the GNPU-reduct of (U, \mathcal{N}^{C}, N_{d}) is \mathcal{N}_{0} = \{N_{3}, N_{4}\} , and Core_{U}^{P}(\mathcal{N}^{C}) = \{N_{3}, N_{4}\} . It is easy to check that I(Q(x_{j}, x_{i})) = GD_{U}^{P}(x_{i}, N_{d}(x_{j})) . Moreover, I(\mathcal{A}_{0}) = \mathcal{N}_{0} and I(Core_{U}^{P}(\mathcal{A})) = Core_{U}^{P}(\mathcal{N}^{C}) .
By Definition 2, \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{1}, x_{4}, x_{5}\}) = \{x_{1}, x_{4}\} , \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{2}, x_{3}\}) = \{x_{2}, x_{3}\} , \underline{\sum\limits_{\mathcal{A}} A_{k}^{O}}(\{x_{6}\}) = \emptyset . According to Definition 20, we get the COL-discernibility matrix \mathcal{MD} .
By Corollary 2, Core_{L}^{O}(\mathcal{A}) = \{A_{4}\} , and \{A_{4}\} is the unique COL-reduct of (U, \mathcal{A}, d) .
5. Relationships between the multi-granulation reduction of IDISs and that of GNDISs
Given an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , define a mapping N_{k}:U\rightarrow \mathcal{P}(U) by N_{k}(x) = S_{A_{k}}(x) for all x\in U \; (k = 1, 2, \cdots, m) and a mapping N_{d}:U\rightarrow \mathcal{P}(U) by N_{d}(y) = [y]_{d} for all y\in U . Then, each N_{k}\;(k = 1, 2, \cdots, m) is a reflexive neighborhood operator on U and N_{d} is a Pawlak neighborhood operator, and we get a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \cdots, N_{m}\} , which is the GNDIS induced by the IDIS (U, A, d) . Define a mapping I:\mathcal{A}^{I}\rightarrow \mathcal{N}^{I} by I(A_{k}) = N_{k} for all A_{k}\in \mathcal{A}^{I} . From the definition of \mathcal{N}^{I} , it is easy to see that I is a bijection.
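As with DMSs, the induced operators can be generated directly from the data table; a small sketch follows, over a hypothetical incomplete table with None encoding the missing value \ast .

```python
# Sketch: induce the operators N_k(x) = S_{A_k}(x) of the GNDIS (U, N^I, N_d)
# from an incomplete table; None encodes the missing value '*'. Toy data.

rows = {1: {'a1': 0, 'a2': 1, 'd': 'yes'},
        2: {'a1': None, 'a2': 1, 'd': 'yes'},
        3: {'a1': 1, 'a2': 0, 'd': 'no'}}

def tolerance_operator(attrs):
    # S_{A_k}(x): objects whose known values agree with x on every a in A_k
    def ok(x, y):
        return all(rows[x][a] == rows[y][a]
                   or rows[x][a] is None or rows[y][a] is None
                   for a in attrs)
    return {x: {y for y in rows if ok(x, y)} for x in rows}

N_I = [tolerance_operator(Ak) for Ak in (['a1'], ['a2'])]
N_d = {x: {y for y in rows if rows[y]['d'] == rows[x]['d']} for x in rows}
print(N_I[0], N_d, sep='\n')
```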
Proposition 7. Suppose that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces the GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . For each X\subseteq U ,
\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \underline{\sum\limits_{\mathcal{N}^{I}}N_{k}^{P}}(X) , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}(X) = \overline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{P}}(X) ,
\underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \underline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{O}}(X) , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(X) = \overline{\sum\limits_{\mathcal{N}^{I}} N_{k}^{O}}(X) .
Proof. It follows directly from Definitions 3, 6 and 7. □
By Proposition 7, the generalized neighborhood multi-granulation rough set models are extension models of the multi-granulation rough set models proposed in [15].
5.1. Pessimistic multi-granulation reduction of IDISs
The pessimistic multi-granulation reduction of an IDIS was discussed in [14,18,32].
Definition 21. [14,18,32] Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and \mathcal{H}\subseteq \mathcal{A}^{I} .
(1) \mathcal{H} is called an incomplete pessimistic lower consistent set (IPL-consistent set) if \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all IPL-consistent sets by Cons_{L}^{P}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{L}^{P}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IPL-reduct. Denote the family of all IPL-reducts of (U, A, d) by Red_{L}^{P}(\mathcal{A}^{I}) , and Core_{L}^{P}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I})}\mathcal{H} is said to be an IPL-core.
(2) \mathcal{H} is called an incomplete pessimistic upper consistent set (IPU-consistent set) if \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U . Denote the family of all IPU-consistent sets by Cons_{U}^{P}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) , and \mathcal{H}^{\prime}\not\in Cons_{U}^{P}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is an IPU-reduct. Denote the family of all IPU-reducts of (U, A, d) by Red_{U}^{P}(\mathcal{A}^{I}) , and Core_{U}^{P}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I})}\mathcal{H} is said to be an IPU-core.
The multi-granulation reduction of an IDIS (U, A, d) can be changed into the multi-granulation reduction of the GNDIS (U, \mathcal{N}^{I}, N_{d}) induced by (U, A, d) .
Theorem 11. Consider an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k} \in \mathcal{A}^{I} ,
(1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}) .
(2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{P}(\mathcal{N}^{I}) .
(3) A_{k} \in Core_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{P}(\mathcal{N}^{I}) .
(4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{P}(\mathcal{N}^{I}) .
(5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{P}(\mathcal{N}^{I}) .
(6) A_{k} \in Core_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{P}(\mathcal{N}^{I}) .
Proof. (1) Due to Definitions 8 and 21, and Proposition 7,
\mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{P}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{P}}([y]_{d}) for all y\in U
\quad \Leftrightarrow \; \underline{\sum\limits_{\mathcal{N}^{I}}N_{k}^{P}}(N_{d}(y)) = \underline{\sum\limits_{I(\mathcal{H})} N_{k}^{P}}(N_{d}(y)) for all y\in U
\quad \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}) .
(2) and (3). By (1) and Definitions 8 and 21, the conclusions are proved.
(4)–(6). Similar to the proof of (1)–(3), the conclusions can be obtained by Definitions 8 and 21, and Proposition 7. □
Discernibility matrices were defined by Zhang et al. [32] to compute the IPL-reducts and IPU-reducts of an IDIS. For any \mathcal{H}\subseteq \mathcal{A}^{I} , define the decision function by h_{\mathcal{H}}(x_{i}) = \{d(x_{j})|x_{j}\in IN_{\mathcal{H}}(x_{i})\} . For any x\in U , define
IP(x) = \{A_{k}\in \mathcal{A}^{I}||h_{\{A_{k}\}}(x)| > 1\} .
\mathcal{IP} = \{IP(x)|x\in U\} is called an IPL-discernibility matrix. For (x, y)\in U\times U , define
IQ(x, y) = \{A_{k}\in \mathcal{A}^{I}|d(y)\in h_{\{A_{k}\}}(x)\} .
\mathcal{IQ} = \{IQ(x, y)|(x, y)\in U\times U\} is called an IPU-discernibility matrix.
Remark 6. If |h_{\mathcal{A}^{I}}(x)| = 1 for each x\in U in an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , then each A_{k}\in \mathcal{A}^{I} is an IPL-reduct. However, IP(x) = \emptyset for all x\in U . In the following, IP(x) is therefore redefined by
IP(x) = \begin{cases}\{A_{k}\in \mathcal{A}^{I}||h_{\{A_{k}\}}(x)| > 1\}, & |h_{\mathcal{A}^{I}}(x)| > 1,\\ \mathcal{A}^{I}, & |h_{\mathcal{A}^{I}}(x)| = 1.\end{cases}
By Definition 9 and Theorem 1, as well as Definition 11 and Theorem 3, we obtain
Corollary 3. [32] Assume that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k}\in\mathcal{A}^{I} ,
(1) \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IP(x)\neq \emptyset for all IP(x)\in \mathcal{IP} .
(2) \mathcal{H}\in Red_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IP(x)\neq \emptyset for all IP(x)\in \mathcal{IP} , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an IP(x)\in \mathcal{IP} such that IP(x)\cap \mathcal{H}_{0} = \emptyset .
(3) A_{k}\in Core_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; there exists some x\in U such that IP(x) = \{A_{k}\} .
(4) \mathcal{H}\in Cons_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IQ(x, y)\neq \emptyset for all IQ(x, y)\neq \emptyset .
(5) \mathcal{H}\in Red_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap IQ(x, y)\neq \emptyset for all IQ(x, y)\neq \emptyset , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an IQ(x, y) such that IQ(x, y)\cap \mathcal{H}_{0} = \emptyset .
(6) A_{k}\in Core_{U}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; there exists some (x, y)\in U\times U such that IQ(x, y) = \{A_{k}\} .
Proof. (1)–(3). By Theorem 11, \mathcal{H} is an IPL-consistent set (or an IPL-reduct) of (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNPL-consistent set (or a GNPL-reduct) of (U, \mathcal{N}^{I}, N_{d}) . According to Property 3 in [32], for any x\in U and \mathcal{H}\subseteq \mathcal{A}^{I} , |h_{\mathcal{H}}(x)| > 1 if and only if IN_{\mathcal{H}}(x)\not\subseteq [x]_{d} . Then, I(IP(x)) = GD_{L}^{P}(x, N_{d}(x)) .
By Theorem 1, \mathcal{H}\in Cons_{L}^{P}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{P}(\mathcal{N}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \cap I(IP(x))\neq\emptyset for each x\in U \; \Leftrightarrow \; \mathcal{H} \cap IP(x)\neq\emptyset for each IP(x)\in \mathcal{IP} .
Similar to the proof of (1), we can get (2) and (3) by Theorem 1.
(4)–(6). According to Theorem 11, \mathcal{H} is an IPU-consistent set (or an IPU-reduct) of (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNPU-consistent set (or a GNPU-reduct) of (U, \mathcal{N}^{I}, N_{d}) .
For x, y\in U , d(y)\in h_{\{A_{k}\}}(x)\Leftrightarrow S_{A_{k}}(x)\cap [y]_{d}\neq\emptyset , and d(y)\in h_{\mathcal{A}^{I}}(x)\Leftrightarrow IN_{\mathcal{A}^{I}}(x)\cap [y]_{d}\neq\emptyset . Then, IQ(x, y) = \{A_{k}\in \mathcal{A}^{I}\,|\,S_{A_{k}}(x)\cap [y]_{d}\neq\emptyset\} whenever IN_{\mathcal{A}^{I}}(x)\cap [y]_{d}\neq\emptyset .
According to Definition 11, I(IQ(x, y)) = GD_{U}^{P}(x, N_{d}(y)) . By Theorem 3, (4)–(6) hold. □
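Computationally, the discernibility sets above can be generated directly from the decision function h . The sketch below follows the reconstruction of IP(x) and IQ(x, y) given earlier, under the same illustrative data layout as before (the function names are ours), and checks the intersection conditions of Corollary 3.

    def h(S, d, H, x):
        # decision function h_H(x): decision values met in the tolerance classes of x
        return {d[z] for k in H for z in S[k][x]}

    def IP(S, d, full, x):
        # IPL-discernibility set, with the Remark 6 redefinition when |h(x)| = 1
        if len(h(S, d, full, x)) == 1:
            return set(full)
        return {k for k in full if len(h(S, d, [k], x)) > 1}

    def IQ(S, d, full, x, y):
        # IPU-discernibility set; empty when d(y) is not reachable from x at all
        if d[y] not in h(S, d, full, x):
            return set()
        return {k for k in full if d[y] in h(S, d, [k], x)}

    def ipl_consistent_by_matrix(U, S, d, full, H):
        # Corollary 3(1): H must meet every IP(x)
        return all(set(H) & IP(S, d, full, x) for x in U)

    def ipu_consistent_by_matrix(U, S, d, full, H):
        # Corollary 3(4): H must meet every nonempty IQ(x, y)
        return all(set(H) & IQ(S, d, full, x, y)
                   for x in U for y in U if IQ(S, d, full, x, y))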
5.2. Optimistic multi-granulation reduction of IDISs
The optimistic multi-granulation reduction of IDISs was discussed in [14,18,32].
Definition 22. [14,18,32] Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} and \mathcal{H}\subseteq \mathcal{A}^{I} .
(1) \mathcal{H} is called an incomplete optimistic lower consistent set (IOL-consistent set) if \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}) = \underline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all IOL-consistent sets by Cons_{L}^{O}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) and \mathcal{H}^{\prime}\not\in Cons_{L}^{O}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called an IOL-reduct. Denote the family of all IOL-reducts of (U, A, d) by Red_{L}^{O}(\mathcal{A}^{I}) ; Core_{L}^{O}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I})}\mathcal{H} is said to be the IOL-core.
(2) \mathcal{H} is called an incomplete optimistic upper consistent set (IOU-consistent set) if \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}) = \overline{\sum\limits_{\mathcal{H}} A_{k}^{O}}([y]_{d}) for all y\in U . Denote the family of all IOU-consistent sets by Cons_{U}^{O}(\mathcal{A}^{I}) . Moreover, if \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) and \mathcal{H}^{\prime}\not\in Cons_{U}^{O}(\mathcal{A}^{I}) whenever \mathcal{H}^{\prime}\subset\mathcal{H} , then \mathcal{H} is called an IOU-reduct. Denote the family of all IOU-reducts of (U, A, d) by Red_{U}^{O}(\mathcal{A}^{I}) ; Core_{U}^{O}(\mathcal{A}^{I}) = \bigcap_{\mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I})}\mathcal{H} is said to be the IOU-core.
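The optimistic notions can be checked in the same illustrative style as the pessimistic sketch above; this is a minimal sketch under the same assumed data layout, with the ∀/∃ quantifiers swapped relative to the pessimistic case.

    def opt_lower(U, S, H, X):
        # optimistic lower approximation: S_{A_k}(x) is contained in X for some A_k in H
        return {x for x in U if any(S[k][x] <= X for k in H)}

    def opt_upper(U, S, H, X):
        # optimistic upper approximation: S_{A_k}(x) meets X for every A_k in H
        return {x for x in U if all(S[k][x] & X for k in H)}

    def is_IOL_consistent(U, S, d, full, H):
        # Definition 22(1): H preserves every optimistic lower approximation of [y]_d
        for y in U:
            cls = {z for z in U if d[z] == d[y]}
            if opt_lower(U, S, H, cls) != opt_lower(U, S, full, cls):
                return False
        return True

    def is_IOU_consistent(U, S, d, full, H):
        # Definition 22(2): H preserves every optimistic upper approximation of [y]_d
        for y in U:
            cls = {z for z in U if d[z] == d[y]}
            if opt_upper(U, S, H, cls) != opt_upper(U, S, full, cls):
                return False
        return True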
Again, the multi-granulation reduction of an IDIS (U, A, d) can be transformed into the multi-granulation reduction of the induced GNDIS (U, \mathcal{N}^{I}, N_{d}) .
Theorem 12. Consider an IDIS (U, A, d) with \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} , which induces a GNDIS (U, \mathcal{N}^{I}, N_{d}) with \mathcal{N}^{I} = \{N_{1}, N_{2}, \; \cdots, N_{m}\} . Then, for \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k} \in \mathcal{A}^{I} ,
(1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{L}^{O}(\mathcal{N}^{I}) .
(2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{L}^{O}(\mathcal{N}^{I}) .
(3) A_{k} \in Core_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{L}^{O}(\mathcal{N}^{I}) .
(4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H}) \in Cons_{U}^{O}(\mathcal{N}^{I}) .
(5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(\mathcal{H})\in Red_{U}^{O}(\mathcal{N}^{I}) .
(6) A_{k} \in Core_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; I(A_{k})\in Core_{U}^{O}(\mathcal{N}^{I}) .
Proof. This is verified by Proposition 7 together with Definitions 13 and 22, in the same way as Theorem 11. □
However, the optimistic multi-granulation reduction of IDISs was not considered in [32]. In the following, we present two discernibility matrices to characterize the IOL-reducts and IOU-reducts of an IDIS.
Definition 23. Given an IDIS (U, A, d) , let \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any x\in U , define

ID_{L}^{O}(x, [x]_{d}) = \{A_{k}\in \mathcal{A}^{I}\,|\,S_{A_{k}}(x)\subseteq [x]_{d}\} if x\in \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([x]_{d}) , and ID_{L}^{O}(x, [x]_{d}) = \mathcal{A}^{I} otherwise.

\mathcal{ID}_{L}^{O} = \{ID_{L}^{O}(x, [x]_{d})|x\in U\} is called an IOL-discernibility matrix. For any x\in U and [y]_{d}\in U/R_{d} , define

ID_{U}^{O}(x, [y]_{d}) = \{A_{k}\in \mathcal{A}^{I}\,|\,S_{A_{k}}(x)\cap [y]_{d} = \emptyset\} if x\not\in \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}([y]_{d}) , and ID_{U}^{O}(x, [y]_{d}) = \mathcal{A}^{I} otherwise.

\mathcal{ID}_{U}^{O} = \{ID_{U}^{O}(x, [y]_{d})|x\in U, [y]_{d}\in U/R_{d}\} is called an IOU-discernibility matrix.
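Reusing opt_lower and opt_upper from the sketch after Definition 22, the discernibility sets of Definition 23 can be computed as follows. This is a sketch of one plausible reading of the definition as reconstructed above, not the authors' code; the \mathcal{A}^{I} fallback keeps the trivial entries from blocking the intersection conditions of Corollary 4.

    def ID_L_O(U, S, d, full, x):
        # IOL-discernibility set of x with respect to its decision class [x]_d
        cls = {z for z in U if d[z] == d[x]}          # [x]_d
        if x in opt_lower(U, S, full, cls):
            return {k for k in full if S[k][x] <= cls}
        return set(full)                              # trivial entry

    def ID_U_O(U, S, d, full, x, y):
        # IOU-discernibility set of x with respect to the decision class [y]_d
        cls = {z for z in U if d[z] == d[y]}          # [y]_d
        if x not in opt_upper(U, S, full, cls):
            return {k for k in full if not (S[k][x] & cls)}
        return set(full)                              # trivial entry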
Corollary 4. Suppose that (U, A, d) is an IDIS and \mathcal{A}^{I} = \{A_{k}\subseteq A|k\in\mathbb{Z}, 1\leq k \leq m\} . For any \mathcal{H}\subseteq \mathcal{A}^{I} , A_{k}\in \mathcal{A}^{I} ,
(1) \mathcal{H}\in Cons_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{L}^{O}(x, [x]_{d})\neq \emptyset for all ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} .
(2) \mathcal{H}\in Red_{L}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{L}^{O}(x, [x]_{d})\neq \emptyset for all ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} , and for any \mathcal{H}_{0}\subset \mathcal{H} , there exists an ID_{L}^{O}(x, [x]_{d})\in \mathcal{ID}_{L}^{O} such that ID_{L}^{O}(x, [x]_{d})\cap \mathcal{H}_{0} = \emptyset .
(3) A_{k}\in Core _{L}^{O} ( \mathcal{A}^{I} ) \Leftrightarrow there is some x\in U such that ID_{L}^{O}(x, [x]_{d}) = \{A_{k}\} .
(4) \mathcal{H}\in Cons_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{U}^{O}(x, [y]_{d})\neq \emptyset for any ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} .
(5) \mathcal{H}\in Red_{U}^{O}(\mathcal{A}^{I}) \; \Leftrightarrow \; \mathcal{H}\cap ID_{U}^{O}(x, [y]_{d})\neq \emptyset for any ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} , and for each \mathcal{H}_{0}\subset \mathcal{H} , there exists an ID_{U}^{O}(x, [y]_{d})\in \mathcal{ID}_{U}^{O} such that ID_{U}^{O}(x, [y]_{d})\cap \mathcal{H}_{0} = \emptyset .
(6) A_{k}\in Core _{U}^{O} ( \mathcal{A}^{I} ) \Leftrightarrow there exist x\in U, [y]_{d}\in U/R_{d} such that ID_{U}^{O}(x, [y]_{d}) = \{A_{k}\} .
Proof. By Theorem 12, \mathcal{H} is an IOL-consistent set (or IOL-reduct, IOU-consistent set, IOU-reduct, respectively) of an IDIS (U, A, d) \; \Leftrightarrow \; I(\mathcal{H}) is a GNOL-consistent set (or GNOL-reduct, GNOU-consistent set, GNOU-reduct, respectively) of the GNDIS (U, \mathcal{N}^{I}, N_{d}) .
By Definitions 14 and 23, I(ID_{L}^{O}(x, [x]_{d})) = GD_{L}^{O}(x, N_{d}(x)) for all x\in U . From Remark 2 and Theorem 5, (1)–(3) are obtained.
Due to Definitions 16 and 23, I(ID_{U}^{O}(x, [y]_{d})) = GD_{U}^{O}(x, N_{d}(y)) for all [y]_{d}\in U/R_{d}, x\in U . According to Theorem 7, (4)–(6) hold. □
By Definition 10 and Theorem 2, an IPL-reduct corresponds to a prime implicant of f(\mathcal{IP}) . According to Definition 12 and Theorem 4, an IPU-reduct corresponds to a prime implicant of f(\mathcal{IQ}) . By Definition 15 and Theorem 6, the IOL-reducts can be found from the prime implicants of f((\mathcal{ID}_{L}^{O})^{\ast}) . Due to Definition 17 and Theorem 8, the IOU-reducts can be found from the prime implicants of f(\mathcal{ID}_{U}^{O}) . Since each discernibility function is a positive Boolean formula in conjunctive normal form, its prime implicants are exactly the minimal hitting sets of the corresponding family of discernibility sets, as sketched below; Example 8 then illustrates the calculation on a concrete IDIS.
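The following brute-force Python sketch computes these minimal hitting sets. It is adequate only for small families such as those in Example 8, and the helper name is our own, not from the paper.

    from itertools import combinations

    def minimal_hitting_sets(family, universe):
        # All minimal subsets of `universe` that meet every set in `family`;
        # each corresponds to a prime implicant of the discernibility
        # function, i.e., to a reduct.
        family = [set(s) for s in family if s]   # empty entries impose no constraint
        elems = sorted(universe)
        found = []
        for r in range(1, len(elems) + 1):
            for cand in combinations(elems, r):
                cs = set(cand)
                if any(prev <= cs for prev in found):
                    continue                     # proper superset of a smaller hitting set
                if all(cs & s for s in family):
                    found.append(cs)
        return found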
Example 8. An IDIS (U, A, d) is presented in Table 5, and \mathcal{A}^{I} = \{A_{1} = \{a_{1}, a_{2}, a_{3}\}, A_{2} = \{a_{4}, a_{5}\}, A_{3} = \{a_{6}, a_{7}\}, A_{4} = \{a_{8}, a_{9}, a_{10}\}\} . The granules of the elements are presented in Table 6.
We get that U/R_{d} = \{\{x_{1}, x_{5}\}, \{x_{2}, x_{3}, x_{4}\}, \{x_{6}\}\} . By Definition 3, \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{1}, x_{5}\}) = \{x_{1}\} , \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{2}, x_{3}, x_{4}\}) = \{x_{2}, x_{3}, x_{4}\} , \underline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{1}, x_{5}\}) = \{x_{1}, x_{5}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{2}, x_{3} , x_{4}\}) = \{x_{2}, x_{3}, x_{4}\} , \overline{\sum\limits_{\mathcal{A}^{I}} A_{k}^{O}}(\{x_{6}\}) = \{x_{6}\} .
According to Definition 23, we have \mathcal{ID}_{L}^{O} = \{\{A_{1}, A_{2}\}, \{A_{2}, A_{3}, A_{4}\}, \{A_{3}\}, \{A_{1}, A_{2}, A_{4}\}, \{A_{1}\}, \{A_{1}, A_{2}, A_{3}, A_{4}\}\} . Hence f(\mathcal{ID}_{L}^{O}) = (A_{1}\vee A_{2})\wedge (A_{2}\vee A_{3}\vee A_{4})\wedge A_{3} \wedge (A_{1}\vee A_{2}\vee A_{4}) \wedge A_{1} \wedge (A_{1}\vee A_{2}\vee A_{3}\vee A_{4}) = A_{1}\wedge A_{3} , so \{A_{1}, A_{3}\} is the unique IOL-reduct.
By Definition 23, we get \mathcal{ID}_{U}^{O} = \{\{A_{2}, A_{3}, A_{4}\}, \{A_{3}\}, \{A_{1}\}, \{A_{1}, A_{4}\}, \{A_{1}, A_{2}\}, \{A_{1}, A_{3}, A_{4}\}, \{A_{1}, A_{2}, A_{4}\}, \{A_{1}, A_{2}, A_{3}, A_{4}\}\} . Then f(\mathcal{ID}_{U}^{O}) = (A_{2}\vee A_{3}\vee A_{4})\wedge A_{3}\wedge A_{1} \wedge (A_{1}\vee A_{4}) \wedge (A_{1}\vee A_{2}) \wedge (A_{1}\vee A_{3}\vee A_{4}) \wedge (A_{1}\vee A_{2}\vee A_{4}) \wedge (A_{1}\vee A_{2}\vee A_{3}\vee A_{4}) = A_{1}\wedge A_{3} . Thus, \{A_{1}, A_{3}\} is the unique IOU-reduct.
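As a quick machine check of Example 8, using the hypothetical minimal_hitting_sets helper sketched above, both families yield \{A_{1}, A_{3}\} as the only minimal hitting set:

    ID_L = [{"A1", "A2"}, {"A2", "A3", "A4"}, {"A3"},
            {"A1", "A2", "A4"}, {"A1"}, {"A1", "A2", "A3", "A4"}]
    ID_U = [{"A2", "A3", "A4"}, {"A3"}, {"A1"}, {"A1", "A4"}, {"A1", "A2"},
            {"A1", "A3", "A4"}, {"A1", "A2", "A4"}, {"A1", "A2", "A3", "A4"}]
    for fam in (ID_L, ID_U):
        print(minimal_hitting_sets(fam, {"A1", "A2", "A3", "A4"}))
        # each call returns [{'A1', 'A3'}], i.e., {A_1, A_3} is the unique reduct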
6.
Conclusions
The multi-granulation reduction structures of GNDISs based on multi-granulation rough sets have been discussed in this paper, and discernibility matrices and discernibility functions have been constructed to calculate the multi-granulation reducts of GNDISs. Furthermore, the multi-granulation reductions of DMSs and IDISs have been characterized by discernibility matrices and discernibility functions derived from the reduction theory of GNDISs. The multi-granulation reduction of GNDISs can therefore serve as a general model for the multi-granulation reduction of DISs by the discernibility technique, which provides a theoretical foundation for designing multi-granulation reduction algorithms for DISs. The multi-granulation reducts of the three kinds of DISs are summarized in Table 7. The discernibility method is a theoretical method for computing all reducts, and an algorithm that obtains all reducts of a high-dimensional information system by computing discernibility matrices and discernibility functions is time-consuming. Hence, heuristic reduction algorithms based on discernibility matrices can be designed to obtain a single reduct. Matrix computation or dynamic reduction algorithms based on discernibility matrices could also be used to improve the computational efficiency of reduction algorithms. In future work, we will explore the multi-granulation reduction of partially labelled DISs by the discernibility technique.
Author contributions
Yanlan Zhang: Conceptualization, Funding Acquisition, Formal analysis, Writing-Original Draft; Changqing Li: Conceptualization, Validation, Funding Acquisition, Writing-Review & Editing. All authors have read and approved the final version of the manuscript for publication.
Use of Generative-AI tools declaration
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work was supported by Grants from the Fujian Provincial Natural Science Foundation of China (Nos. 2022J01912, 2024J01803).
Conflict of interest
The authors declare that there is no conflict of interest.