Abstract
This paper presents a novel model that combines several interesting features in relation to rough sets, namely multi-granularity (which extends Pawlak’s single-granulation approach), soft rough sets (where the granulation structure is defined by soft sets), and coverings that induce rough sets. Optimistic and pessimistic multi-granular models are the outcome of this hybridization. Their properties, their mutual relationships, and their links with existing models are thoroughly explored. Finally, an application of the model to multi-criteria group decision making is put forward. Examples and graphical discussions illustrate the performance of this criterion.
Notes
The transition from one element to the other is well known. From \(\rho \) we derive the partition \(\hat{P}=\{[u]_{\rho }:u\in U\}\), and from the partition \(\hat{P}\) we define the equivalence relation \(\rho \) according to the expression \(u\rho v\) if and only if there is \(P\in \hat{P}\) such that \(u,v\in P\).
The definition when the primitive approximation space is \((U,\hat{P})\) is trivial: \(\underline{\rho }(B)=\cup \{P\in \hat{P} \vert P\subseteq B\}\) and \(\overline{\rho }(B)=\cup \{P\in \hat{P} \vert P\cap B\ne \emptyset \}\).
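In computational terms, these partition-based approximations are straightforward to evaluate. The following is a minimal Python sketch of the two formulas above; the function names and the sample partition are illustrative, not taken from the paper.

```python
def pawlak_lower(partition, B):
    """Union of the blocks P of the partition with P ⊆ B."""
    out = set()
    for P in partition:
        if P <= B:
            out |= P
    return out

def pawlak_upper(partition, B):
    """Union of the blocks P of the partition with P ∩ B ≠ ∅."""
    out = set()
    for P in partition:
        if P & B:
            out |= P
    return out

partition = [{1, 2}, {3}, {4, 5}]   # a partition of U = {1, ..., 5}
B = {1, 2, 4}
print(pawlak_lower(partition, B))   # {1, 2}
print(pawlak_upper(partition, B))   # {1, 2, 4, 5}
```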
References
Alcantud JCR (2002) Revealed indifference and models of choice behavior. J Math Psychol 46:418–430
Alcantud JCR, de Andrés R (2016) A segment-based approach to the analysis of project evaluation problems by hesitant fuzzy sets. Int J Comput Intell Syst 9(2):325–339
Alcantud JCR, Santos-García G (2017) A new criterion for soft set based decision making problems under incomplete information. Int J Comput Intell Syst 10:394–404
Alcantud JCR, Varela G, Santos-Buitrago B, Santos-García G, Jiménez MF (2019) Analysis of survival for lung cancer resections cases with fuzzy and soft set theory in surgical decision making. PLoS One 14(6):e0218283
Ali MI (2011) A note on soft sets, rough sets and fuzzy soft sets. Appl Soft Comput 11(4):3329–3332
Ali MI, Shabir M (2014) Logic connectives for soft sets and fuzzy soft sets. IEEE Trans Fuzzy Syst 22(6):1431–1442
Fan BJ, Tsang ECC, Li WT, Xue XP (2017) Multigranulation soft rough sets. In: 2017 international conference on wavelet analysis and pattern recognition (ICWAPR). https://doi.org/10.1109/ICWAPR.2017.8076653
Fatimah F, Rosadi D, Hakim RBF, Alcantud JCR (2018) \(N\)-soft sets and their decision making algorithms. Soft Comput 22:3829–3842
Feng F (2011) Soft rough sets applied to multicriteria group decision making. Ann Fuzzy Math Inform 2:69–80
Feng F, Li C, Davvaz B, Ali MI (2010) Soft sets combined with fuzzy sets and rough sets: a tentative approach. Soft Comput 14(9):899–911
Feng F, Liu XY, Leoreanu-Fotea V, Jun YB (2011) Soft sets and soft rough sets. Inform Sci 181(6):1125–1137
Feng F, Li Y, Cagman N (2012) Generalized uni-int decision making schemes based on choice soft sets. Eur J Oper Res 220:162–170
Han B, Li Y, Liu J, Geng S, Li H (2014) Elicitation criterions for restricted intersection of two incomplete soft sets. Knowl Based Syst 59:121–131
Herawan T, Deris MM (2011) A soft set approach for association rules mining. Knowl Based Syst 24(1):186–195
Khalil AM, Li S-G, Lin Y, Li H-X, Ma S-G (2020) A new expert system in prediction of lung cancer disease based on fuzzy soft sets. Soft Comput. https://doi.org/10.1007/s00500-020-04787-x
Kong Q, Wei Z (2017) Further study of multi-granulation fuzzy rough sets. J Intell Fuzzy Syst 32:2413–2424
Li Z, Xie T (2014) The relationships among soft sets, soft rough sets and topologies. Soft Comput 18:717–728
Li Z, Xie N, Wen G (2015) Soft coverings and their parameter reductions. Appl Soft Comput 31:48–60
Liu GL, Zhu W (2008) The algebraic structures of generalized rough set theory. Inform Sci 178(21):4105–4113
Liu Z, Alcantud JCR, Qin K, Pei Z (2019) The relationship between soft sets and fuzzy sets and its application. J Intell Fuzzy Syst 36:3751–3764
Luce RD (1956) Semiorders and a theory of utility discrimination. Econometrica 24:178–191
Maji PK, Roy AR, Biswas R (2002) An application of soft sets in a decision making problem. Comput Math Appl 44(8):1077–1083
Maji PK, Biswas R, Roy AR (2003) Soft set theory. Comput Math Appl 45(4):555–562
Mathew TJ, Sherly E, Alcantud JCR (2019) A multimodal adaptive approach on soft set based diagnostic risk prediction system. J Intell Fuzzy Syst 34:1609–1618
Molodtsov D (1999) Soft set theory-first results. Comput Math Appl 37(4):19–31
Pawlak Z (1982) Rough sets. Int J Comput Inform Sci 11(5):341–356
Qian Y, Liang J, Yao Y, Dang C (2010) MGRS: a multi-granulation rough set. Inform Sci 180:949–970
Rehman N, Ali A, Ali MI, Park C (2018) SDMGRS: soft dominance based multi granulation rough sets and their applications in conflict analysis problems. IEEE Access 6:31399–31416
Shabir M, Ali MI, Shaheen T (2013) Another approach to soft rough sets. Knowl Based Syst 40(1):72–80
Shaheen T, Ali MI, Shabir M (2020) Generalized hesitant fuzzy rough sets (GHFRS) and their application in risk analysis. Soft Comput. https://doi.org/10.1007/s00500-020-04776-0
She Y, He X (2012) On the structure of the multigranulation rough set model. Knowl Based Syst 36:81–92
Skowron A, Stepaniuk J (1996) Tolerance approximation spaces. Fundam Inform 27:245–253
Słowiński R, Vanderpooten D (2000) A generalized definition of rough approximations based on similarity. IEEE Trans Knowl Data Eng 12(2):331–336
Sun B, Ma W (2014) Soft fuzzy rough sets and its application in decision making. Artif Intell Rev 41(1):67–80
Sun B, Ma W, Chen X (2015) Fuzzy rough set on probabilistic approximation space over two universes and its application to emergency decision making. Expert Syst 32:507–521
Sun B, Ma W, Qian Y (2017a) Multigranulation fuzzy rough set over two universes and its application to decision making. Knowl Based Syst 123:61–74
Sun B, Ma W, Xiao X (2017b) Three-way group decision making based on multigranulation fuzzy decision-theoretic rough set over two universes. Int J Approx Reason 81:87–102
Wang X, Tsang E, Zhao S, Chen D, Yeung D (2007) Learning fuzzy rules from fuzzy examples based on rough set techniques. Inform Sci 177(20):4493–4514
Wang X, Zhai J, Lu S (2008) Induction of multiple fuzzy decision trees based on rough set technique. Inform Sci 178(16):3188–3202
Xiao Z, Gong K, Zou Y (2009) A combined forecasting approach based on fuzzy soft sets. J Comput Appl Math 228:326–333
Xiao Z, Chen W, Li L (2013) A method based on interval-valued fuzzy soft set for multi-attribute group decision-making problems under uncertain environment. Knowl Inf Syst 34:653–669
Xu WH, Wang QR, Zhang XT (2013) Multi-granulation rough sets based on tolerance relations. Soft Comput 17:1241–1252
Xu WH, Wang Q, Luo S (2014) Multi-granulation fuzzy rough sets. J Intell Fuzzy Syst 26(3):1323–1340
Yu G-F, Li D-F, Fei W (2018) A novel method for heterogeneous multi-attribute group decision making with preference deviation. Comput Ind Eng 124:58–64
Yuksel S, Tozlu N, Dizman TH (2015) An application of multicriteria group decision making by soft covering based rough sets. Filomat 29:209–219
Yuksel S, Ergul ZG, Tozlu N (2014) Soft covering based rough sets and their application. Sci World J 2014:970893 (9 pages)
Zhan J, Alcantud JCR (2019a) A novel type of soft rough covering and its application to multicriteria group decision making. Artif Intell Rev 52:2381–2410
Zhan J, Alcantud JCR (2019b) A survey of parameter reduction of soft sets and corresponding algorithms. Artif Intell Rev 52:1839–1872
Zhan J, Zhu K (2017) A novel soft rough fuzzy set: \(Z\)-soft rough fuzzy ideals of hemirings and corresponding decision making. Soft Comput 21:1923–1936
Zhan J, Liu Q, Davvaz B (2015) A new rough set theory: rough soft hemirings. J Intell Fuzzy Syst 28:1687–1697
Zhan J, Ali MI, Mehmood N (2017a) On a novel uncertain soft set model: \(Z\)-soft fuzzy rough set model and corresponding decision making methods. Appl Soft Comput 56:446–457
Zhan J, Liu Q, Herawan T (2017b) A novel soft rough set: soft rough hemirings and corresponding multicriteria group decision making. Appl Soft Comput 54:393–402
Zhang XH, Miao D, Liu C, Le M (2016) Constructive methods of rough approximation operators and multigranulation rough sets. Knowl Based Syst 91:114–125
Zhang H, Dong Y, Chen X (2018) The 2-rank consensus reaching model in the multigranular linguistic multiple attribute group decision making. IEEE Trans Syst Man Cybern Syst 48:2080–2094
Zhang H, Zhan J, He Y (2019) Multi-granulation hesitant fuzzy rough sets and corresponding applications. Soft Comput 23:13085–13103
Zhu W (2007) Generalized rough sets based on relations. Inform Sci 177(22):4997–5011
Zhu W (2009) Relationship between generalized rough sets based on binary relation and covering. Inform Sci 179(3):210–225
Zou Y, Xiao Z (2008) Data analysis approaches of soft sets under incomplete information. Knowl Based Syst 21(8):941–945
Acknowledgements
Jianming Zhan was supported by a research grant from the National Natural Science Foundation of China (11961025).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Human and animal participants
This article does not contain any studies with human participants or animals performed by the authors.
Informed consent
This article is submitted with the consent of all the authors.
Additional information
Communicated by A. Di Nola.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix I: Examples and graphical discussion
Example I.1 illustrates Definitions 4.1 and 4.3. It is inspired by the application in (Li et al. 2015, Section 6). Our presentation does not follow (Li et al. 2015) closely, because the goal of Li et al. is parameter reduction (and they are not concerned with decision making). In addition, the practical situation in (Li et al. 2015, Section 6) has been streamlined to emphasize the computations.
Example I.1
Let \(U=\{x_1, \ldots , x_5\}\) be a set of pilots. They are evaluated by two experts who assess whether they are sufficiently well trained with respect to four attributes \(A=\{a_1, a_2, a_3, a_4\}\). The respective evaluations are given in Table 1.
The information in Table 1 naturally produces respective SCASs, namely \(S_1=(U, C_{{\mathbf {G}_1}})\) and \(S_2=(U, C_{{\mathbf {G}_2}})\). In formal terms, we can define \({\mathbf {G}_1}=(F_1,A)\), \({\mathbf {G}_2}=(F_2,A)\) where:
and
For \(X=\{x_1, x_2, x_3, x_4\}\) we compute SCOMLA and SCOMUA by a routine application of Definition 4.1 as follows:
and
Therefore X is not MGOSRC: its multi-granular optimistic lower and upper approximations coincide, so X is multi-granular optimistic SC-definable.
However, X is MGPSRC, because a routine application of Definition 4.3 shows
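The optimistic computations of this example can be reproduced mechanically. The operators below follow the set-theoretic descriptions used in the proof of Theorem 4.2 (Appendix II): the lower approximation unites every \(F_i(a)\) contained in X, and the upper approximation intersects, over the granulations, the unions of the \(F_i(a)\) meeting X. Since Table 1 is not reproduced here, the soft sets in the sketch are hypothetical.

```python
def scomla(soft_sets, X):
    """Optimistic lower approximation: union of all F_i(a) with F_i(a) ⊆ X."""
    out = set()
    for F in soft_sets:              # one full soft set per granulation
        for block in F.values():
            if block <= X:
                out |= block
    return out

def scomua(soft_sets, X):
    """Optimistic upper approximation: ∩_i ∪{F_i(a) : F_i(a) ∩ X ≠ ∅}."""
    unions = []
    for F in soft_sets:
        u = set()
        for block in F.values():
            if block & X:
                u |= block
        unions.append(u)
    return set.intersection(*unions)

F1 = {'a1': {1, 2}, 'a2': {2, 3}, 'a3': {4, 5}}   # hypothetical granulations
F2 = {'a1': {1}, 'a2': {2, 3, 4}, 'a3': {5}}
X = {1, 2, 3}
# Here SCOMLA(X) = SCOMUA(X) = {1, 2, 3}, so this X is also
# multi-granular optimistic SC-definable.
print(scomla([F1, F2], X), scomua([F1, F2], X))
```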
Our next example illustrates the application of Algorithm 1 to the decision making problem posed by Example I.1:
Example I.2
Consider the situation of Example I.1. Let us apply the decision making mechanism explained in Sect. 5 in order to select one of the five pilots.
Some direct computations produce the choice values derived from the respective single-granulations, namely \(c_1(x_1)=3\), \(c_1(x_2)=2\), \(c_1(x_3)=1\), \(c_1(x_4)=2\), \(c_1(x_5)=1\), for the first case and \(c_2(x_1)=2\), \(c_2(x_2)=2\), \(c_2(x_3)=1\), \(c_2(x_4)=3\), \(c_2(x_5)=2\) for the second.
The expression of the pessimistic choice values does not depend upon the weights that we may attach to the experts. The pessimistic choice values produce the following figures by Equation (12):
In order to compute optimistic assessments, the weights of the experts are necessary. Suppose first that the opinions of both experts are equally valuable. We represent this by fixing \(\omega =(\frac{1}{2}, \frac{1}{2})\). Then, the \(\omega \)-optimistic choice values are computed by Equation (11) and produce the following figures:
If \(\omega =(\frac{3}{4}, \frac{1}{4})\) instead, so that the first opinion is 3 times as important as the second opinion, then the \(\omega \)-optimistic choice values are
To conclude the analysis, let us now describe the key facts when we decide to fix \(\lambda =\frac{1}{4}\). This selection places three times more weight on the optimistic evaluation than on the pessimistic one.
If \(\omega =(\frac{1}{2}, \frac{1}{2})\) then \(Q_{\lambda }^{\omega }(x_i) = \lambda A(x_i) + (1-\lambda ) B^{\omega }(x_i)= \frac{1}{4} A(x_i) + \frac{3}{4} B^{\omega }(x_i)\) produces the following figures:
Hence, the recommended choice when \(\lambda =\frac{1}{4}\) and \(\omega =(\frac{1}{2}, \frac{1}{2})\) is either \(x_1\) or \(x_4\).
If \(\omega =(\frac{3}{4}, \frac{1}{4})\) then \(Q_{\lambda }^{\omega }(x_i) = \frac{1}{4} A(x_i) + \frac{3}{4} B^{\omega }(x_i)\) produces the following figures:
Hence, the recommended choice when \(\lambda =\frac{1}{4}\) and \(\omega =(\frac{3}{4}, \frac{1}{4})\) is \(x_1\).
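These scores can be reproduced numerically. In the sketch below, c1 and c2 are the single-granulation choice values computed in Example I.2; the pessimistic values A are a reconstruction (they agree with the closed-form expressions in Remark I.3, but are not listed verbatim in the text); and we assume, as the computations in Remark I.3 indicate, that the \(\omega \)-optimistic value of Equation (11) is the \(\omega \)-weighted sum of c1 and c2.

```python
c1 = {'x1': 3, 'x2': 2, 'x3': 1, 'x4': 2, 'x5': 1}
c2 = {'x1': 2, 'x2': 2, 'x3': 1, 'x4': 3, 'x5': 2}
A = {'x1': 2, 'x2': 1, 'x3': 0, 'x4': 2, 'x5': 1}   # reconstructed pessimistic values

def Q(lam, w1, x):
    """Global score: lam * pessimistic + (1 - lam) * omega-optimistic."""
    B = w1 * c1[x] + (1 - w1) * c2[x]   # assumed form of Equation (11)
    return lam * A[x] + (1 - lam) * B

equal = {x: Q(0.25, 0.5, x) for x in c1}     # lambda = 1/4, omega = (1/2, 1/2)
biased = {x: Q(0.25, 0.75, x) for x in c1}   # lambda = 1/4, omega = (3/4, 1/4)

print(equal['x1'] == equal['x4'])    # True: x1 and x4 tie at the top
print(max(biased, key=biased.get))   # x1 is recommended
```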
Table 2 summarizes the elements that produce our recommendations, as a function of the selected parameters.
Observe that we can also explore graphically whether, and to what extent, the recommendations vary when we change the parameter \(\lambda \). Consider the case where we have fixed \(\omega =(\frac{1}{2}, \frac{1}{2})\). Then, Fig. 1 represents the values of each \(Q_{\lambda }^{\omega }(x_i)\) as a function of \(\lambda \), namely
As explained in Sect. 5, these values produce segments because \(\lambda \in [0,1]\). We can see that, except when \(\lambda =1\), the ranking is the same no matter how optimistic we are: pilots \(x_1\) and \(x_4\) are tied at the top, \(x_2\) is second, then \(x_5\), and finally \(x_3\). The only difference appears when \(\lambda =1\), i.e., when we are totally pessimistic: then there is another tie at positions 3-4 (pilots \(x_2\) and \(x_5\)).
Figure 2 represents the values of the respective \(Q_{\lambda }^{\omega }(x_i)\) as a function of \(\lambda \) when \(\omega =(\frac{3}{4}, \frac{1}{4})\). As in the previous case, the ranking is consistently the same when \(\lambda < 1\): pilot \(x_1\) is first, \(x_4\) is second, third is \(x_2\), afterwards \(x_5\) and \(x_3\) is last. But when \(\lambda =1\), i.e., we assume the pessimistic evaluations and discard the optimistic figures, the ranking is the same as in the case \(\omega =(\frac{1}{2}, \frac{1}{2})\): there are ties at positions 1-2 (pilots \(x_1\) and \(x_4\)) and also at positions 3-4 (pilots \(x_2\) and \(x_5\)).
The latter observation is consistent with the fact that when \(\lambda = 1\), we only use the pessimistic evaluations which are independent of \(\omega \).
Remark I.3
The examples in this “Appendix” have a feature that allows us to picture the influence of the relative weights of the granulations (for a fixed risk factor \(\lambda \)): the corresponding values of \(Q_{\lambda }^{\omega }(x_i)\) produce segments too. The key fact is that \(\omega \) can be characterized by its first component \(\omega _1\) because \(\omega = (\omega _1, 1-\omega _1).\) We see this in Fig. 3: when we fix \(\lambda = \frac{1}{3}\), this figure represents the values of each \(Q_{\lambda }^{\omega }(x_i)\) as a function of \(\omega _1\), which is the weight of the first opinion.
For illustration, \(Q_{\lambda }^{\omega }(x_1) = \frac{1}{3}\cdot 2 + \frac{2}{3} (3\cdot \omega _1 + 2 (1 - \omega _1))= \frac{2}{3}(3 + \omega _1)\). Similar computations produce \(Q_{\lambda }^{\omega }(x_2) = \frac{5}{3}\), \(Q_{\lambda }^{\omega }(x_3) = \frac{2}{3} \), \(Q_{\lambda }^{\omega }(x_4) = \frac{2}{3}(4- \omega _1)\), and \(Q_{\lambda }^{\omega }(x_5) = \frac{1}{3}(5 - 2 \omega _1)\).
Some conclusions may be drawn from the representation captured by Fig. 3. When \(\lambda = \frac{1}{3}\), pilot \(x_4\) should be selected if the first opinion weighs less than the second one. This is because \(Q_{\lambda }^{\omega }(x_1)\) and \(Q_{\lambda }^{\omega }(x_4)\) intersect at \(\omega _1 = 0.5\). If both opinions are equally valuable (i.e., \(\omega _1 = 0.5\)), pilots \(x_1\) and \(x_4\) are deemed equally skilled, and both are strictly better than the other three pilots. Pilot \(x_1\) is recommended when the first opinion weighs more than the second one.
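The crossover described above follows directly from the closed forms computed in Remark I.3: \(Q_{\lambda }^{\omega }(x_1) = \frac{2}{3}(3+\omega _1)\) and \(Q_{\lambda }^{\omega }(x_4) = \frac{2}{3}(4-\omega _1)\) are equal exactly when \(3+\omega _1 = 4-\omega _1\), i.e., at \(\omega _1 = \frac{1}{2}\). A short numerical confirmation:

```python
def Q1(w1):
    """Closed form of Q(x1) from Remark I.3 (lambda = 1/3)."""
    return (2 / 3) * (3 + w1)

def Q4(w1):
    """Closed form of Q(x4) from Remark I.3 (lambda = 1/3)."""
    return (2 / 3) * (4 - w1)

print(Q4(0.3) > Q1(0.3))                 # True: x4 wins when the first opinion weighs less
print(abs(Q1(0.5) - Q4(0.5)) < 1e-12)    # True: tie at omega_1 = 0.5
print(Q1(0.7) > Q4(0.7))                 # True: x1 wins when the first opinion weighs more
```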
Appendix II: Proofs
In this section, we prove the results stated in previous sections.
Proof of Theorem 4.2
Statement (1) is immediate, and it crucially depends on the fact that we are using SCASs, i.e., that the soft sets are full. It is also immediate that (2) holds true.
In order to prove (3), observe that the first inclusion is obvious. The fact that each \(S_i\) is a SCAS will be helpful to prove \(X \subseteq FS^{\sum {S_i}}_0(X)\): it implies that each \((F_i, A)\) is a full soft set. Pick any \(x\in X\); then for each \(i=1, \ldots , m\) there must exist \(a_i\in A\) such that \(x\in F_i(a_i)\). In particular, \(X\cap F_i(a_i)\ne \varnothing \).
It is now clear that \(x\in \cap _{i\in I} ( \cup \{F_i(a)\vert \ X\cap F_i(a)\ne \varnothing , a\in A \} )\) because for each \(i=1, \ldots , m\), \(x\in F_i(a_i) \subseteq \cup \{F_i(a)\vert \ X\cap F_i(a)\ne \varnothing , a\in A \}\).
Let us prove (4). In order to check that \(FS_{\sum {S_i}}^0\) is monotonic, observe that when \(X\subseteq Y\subseteq U\), the property \(F_i(a)\subseteq X\) implies \(F_i(a)\subseteq Y\). Hence,
To check that \(FS^{\sum {S_i}}_0\) is monotonic, fix \(X\subseteq Y\subseteq U\) and \(i\in I=\{1, \ldots , m\}\). If \(a\in A\) is such that \(X\cap F_i(a)\ne \varnothing \), clearly \(Y\cap F_i(a)\ne \varnothing \) holds too. Therefore for each \(i\in I\),
and now the conclusion is obvious by taking intersections.
In order to prove (5), observe first that \(FS_{\sum {S_i}}^0(X) \subseteq X\) holds true, which in turn yields \(FS_{\sum {S_i}}^0(FS_{\sum {S_i}}^0(X))\subseteq FS_{\sum {S_i}}^0(X)\) by a combination of (3) and (4). In order to prove the inclusion \(FS_{\sum {S_i}}^0(X)\subseteq FS_{\sum {S_i}}^0(FS_{\sum {S_i}}^0(X))\), notice that by construction, when \(x\in F_i(a)\subseteq X\) it must always be the case that \(F_i(a)\subseteq FS_{\sum {S_i}}^0(X)\), for any \(i\in I\) and \(a\in A\). Therefore \(x\in FS_{\sum {S_i}}^0(X)\) implies that there is \(x\in F_i(a)\subseteq FS_{\sum {S_i}}^0(X)\), for some \(i\in I\) and \(a\in A\). Hence, \(x\in FS_{\sum {S_i}}^0(X)\) implies \(x\in FS_{\sum {S_i}}^0(FS_{\sum {S_i}}^0(X))\).
Regarding (6), observe that the fact \(FS^{\sum {S_i}}_0 (X) \subseteq FS^{\sum {S_i}}_0(FS^{\sum {S_i}}_0(X))\) is again a consequence of statements (3) and (4).
Claim (7) holds by an argument in the proof of (5). When \(x\in F_i(a)\) it must always be the case that \(F_i(a)\subseteq FS_{\sum {S_i}}^0(F_i(a))\). And \(FS_{\sum {S_i}}^0(F_i(a))\subseteq F_i(a)\) holds by (3).
In order to prove (8), first let us select \(x\in \sim FS_{\sum {S_i}}^0(X)\). By the construction of \(FS_{\sum {S_i}}^0\), for each \(i\in I\) and each \(a\in A\) with \(x\in F_i(a)\), it must be the case that \(F_i(a)\cap (\sim X) \ne \varnothing \). Since every \((F_i,A)\) is full, for each \(i\in I\) there is some \(a_i\in A\) with \(x\in F_i(a_i)\). Therefore x verifies the condition for belonging to \(FS^{\sum {S_i}}_0(\sim X)\).
Secondly, let us select \(x\in \sim FS^{\sum {S_i}}_0(X)\). By the construction of \(FS^{\sum {S_i}}_0\), there must be \(i\in I\) such that \(F_i(a)\cap X = \varnothing \) for each \(a\in A\). Equivalently, there is \(i\in I\) such that \(F_i(a)\subseteq \sim X \) for each \(a\in A\). This fact ensures \(x \in FS_{\sum {S_i}}^0(\sim X)\). \(\square \)
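The two inclusions just proved can be checked numerically. The sketch below implements the optimistic operators in the set-theoretic form used in this proof and verifies both inclusions on illustrative full soft sets (not the data of the examples):

```python
U = {1, 2, 3, 4, 5}
F1 = {'a1': {1, 2}, 'a2': {2, 3}, 'a3': {4, 5}}   # hypothetical full soft sets
F2 = {'a1': {1}, 'a2': {2, 3, 4}, 'a3': {5}}

def opt_lower(soft_sets, X):
    """Union of all blocks F_i(a) contained in X."""
    return set().union(*[b for F in soft_sets for b in F.values() if b <= X])

def opt_upper(soft_sets, X):
    """Intersection over i of the union of blocks F_i(a) meeting X."""
    return set.intersection(
        *[set().union(*[b for b in F.values() if b & X]) for F in soft_sets]
    )

for X in ({1, 2}, {3, 5}, {2, 4}):
    # complement of the lower approximation sits inside the upper
    # approximation of the complement, and dually
    assert U - opt_lower([F1, F2], X) <= opt_upper([F1, F2], U - X)
    assert U - opt_upper([F1, F2], X) <= opt_lower([F1, F2], U - X)
print("duality inclusions verified")
```

Note that fullness of the soft sets is essential here, exactly as in the proof: without it, an element could lie outside every block of some granulation and the second inclusion could fail.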
Proof of Theorem 4.4
It is immediate that statements (1) and (2) hold true.
In order to prove (3), observe that the first inclusion is obvious. Proposition 4.5 and (3) in Theorem 4.2 justify the inclusion \(X \subseteq FS^{\sum {S_i}}_P(X)\).
Let us prove (4). In order to check that \(FS_{\sum {S_i}}^P\) is anti-monotonic, observe that when \(X\subseteq Y\subseteq U\) and \(x\in FS_{\sum {S_i}}^P(Y)\), there must be \(a\in A\) with the property that \(x\in F_i(a)\) whenever \(F_i(a)\subseteq Y\). Hence, \(a\) verifies the property that \(x\in F_i(a)\) whenever \(F_i(a)\subseteq X\), which guarantees \(x\in FS_{\sum {S_i}}^P(X)\).
To check that \(FS^{\sum {S_i}}_P\) is monotonic, fix \(X\subseteq Y\subseteq U\) and observe that \(x\in FS^{\sum {S_i}}_P(X)\) entails the existence of \(F_i(a)\) such that \(x\in F_i(a)\) and \(X\cap F_i(a)\ne \varnothing \). Clearly \(Y\cap F_i(a)\ne \varnothing \) holds too, thus \(x\in FS^{\sum {S_i}}_P(Y)\).
In order to justify (5), observe that it is a routine application of the first claim in (4) to the inclusions given by (3), Proposition 4.5, and (3) in Theorem 4.2.
Let us prove (6). The inclusion \(FS^{\sum {S_i}}_P(FS^{\sum {S_i}}_0(X)) \subseteq FS^{\sum {S_i}}_P (X)\) is fairly immediate because \(x\in FS^{\sum {S_i}}_P(FS^{\sum {S_i}}_0(X))\) means that \(x\in F_i(a)\) for some \(F_i(a)\) with the property \(F_i(a)\cap FS^{\sum {S_i}}_0(X)\ne \varnothing \). And because \(X\subseteq FS^{\sum {S_i}}_0(X)\), \(F_i(a)\) verifies \(F_i(a)\cap X\ne \varnothing \), thus ensuring \(x\in FS^{\sum {S_i}}_P (X)\). The other inclusions in (6) are a routine application of the second claim in (4) to the inclusions given by (3), Proposition 4.5, and (3) in Theorem 4.2. \(\square \)
Cite this article
Alcantud, J.C.R., Zhan, J. Multi-granular soft rough covering sets. Soft Comput 24, 9391–9402 (2020). https://doi.org/10.1007/s00500-020-04987-5