
Granulation of Knowledge: Similarity Based Approach in Information and Decision Systems

  • Reference work entry
Encyclopedia of Complexity and Systems Science


Abbreviations

Knowledge:

The notion of knowledge can be described by positing the structure of the perceivable world as a system of states of things. These states are related among themselves by relations which in turn form a network of interdependent combinations. States and relations, as well as dependencies among them, are reflected in knowledge: things are reflected in objects or notions, relations in notions or concepts, states of things in sentences. Sentences form knowledge [6]; for instance, in Artificial Intelligence, a judiciously chosen set of sentences forms a knowledge base for a logical agent (see [52]).

Knowledge representation:

A chosen symbolic system (language) by means of which notions are encoded and reasoning is formalized.

Description logic:

A language of concepts commonly used for knowledge representation in Artificial Intelligence logical systems (see [2]). The variant which is interesting to us is constructed in the attribute‐value context in which objects are described in terms of chosen attributes (features) and their values. Primitive formulas of description logic are descriptors of the form \( { (a=v) } \) where a is an attribute and v is one of its values. Descriptors are interpreted in the universe U of objects: the meaning \( { [(a=v)] } \) of the descriptor \( { (a=v) } \) is defined as the set \( { \{u\in U \colon a(u)=v\} } \). Descriptors are extended to formulas of description logic with the help of connectives of sentential calculus: when \( { \alpha, \beta } \) are formulas then \( { \alpha\vee\beta } \), \( { \alpha\wedge\beta } \), \( { \neg\alpha } \), \( { \alpha\Rightarrow\beta } \) are formulas as well. The meaning of formulas is defined by recursion: \( { [\alpha\vee\beta]=[\alpha]\cup [\beta] } \), \( { [\alpha\wedge\beta]=[\alpha]\cap [\beta] } \), \( { [\neg\alpha]=U\setminus [\alpha] } \). The meaning of implication \( { \alpha\Rightarrow\beta } \) depends on the chosen interpretation of implication; when it is interpreted as in sentential calculus, that is, as \( { \neg\alpha\vee\beta } \), the meaning is already defined by the given formulas for \( { \vee, \neg } \).
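The set-theoretic semantics of descriptors and connectives can be sketched in a few lines of Python. The universe, attribute names, and values below are invented for illustration; only the definitions of meaning from the text are encoded.

```python
# Hypothetical toy universe: objects described by attribute-value rows.
U = {"u1": {"color": "red", "size": "big"},
     "u2": {"color": "red", "size": "small"},
     "u3": {"color": "blue", "size": "big"}}

def meaning(descriptor):
    """[(a=v)] = {u in U : a(u) = v}."""
    a, v = descriptor
    return {u for u, row in U.items() if row[a] == v}

# Connectives act on meanings as set operations, per the recursion above.
red, big = meaning(("color", "red")), meaning(("size", "big"))
conj = red & big          # [(color=red) ∧ (size=big)]
disj = red | big          # [(color=red) ∨ (size=big)]
neg = set(U) - red        # [¬(color=red)]
```

With the data above, `conj` is `{"u1"}`, `disj` is the whole universe, and `neg` is `{"u3"}`.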

Rough sets: knowledge as classification:

Rough set theory, proposed by Pawlak [30,32], understands knowledge as classification: knowledge is given as a set of equivalence relations, each of which represents a certain classification of objects into its equivalence classes. Together, these relations constitute the knowledge base \( { \mathcal{R} } \). Objects that belong to one class \( { [u]_R } \) of a relation R in the knowledge base are R‑indiscernible. Objects that belong to the intersection \( { \bigcap_{R\in {\mathcal{R}}}[u]_R } \) are \( { {\mathcal{R}} } \)‑indiscernible: they cannot be discerned by means of the knowledge available in \( { \mathcal{R} } \).

Indiscernibility relations are formally defined for sets of relations: given \( { \mathcal{T}\subseteq\mathcal{R} } \), the \( { \mathcal{T} } \)‑indiscernibility relation \( { \operatorname{ind}({\mathcal{T}}) } \) is defined as the intersection \( { \bigcap_{R\in {\mathcal{T}}}R } \). Concepts are divided into two classes: \( { {\mathcal{T}} } \)‑exact concepts are sets of objects which can be expressed as unions of \( { \mathcal{T} } \)‑indiscernibility classes; other concepts are \( { {\mathcal{T}} } \)‑rough. For each concept X, there exist the greatest exact concept \( { \underline{\mathcal{T}}X } \) contained in X (the \( { {\mathcal{T}} } \)‑lower approximation to X) and the smallest exact concept \( { \overline{\mathcal{T}}X } \) which contains X (the \( { {\mathcal{T}} } \)‑upper approximation to X). Rough concepts are perceived as uncertain sets sandwiched between their lower and upper approximations.
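The lower and upper approximations can be computed directly from the indiscernibility classes. The partition and concept below are a minimal invented example, not data from the entry.

```python
# T-indiscernibility classes of a hypothetical universe {u1..u5}.
partition = [{"u1", "u2"}, {"u3"}, {"u4", "u5"}]
X = {"u1", "u2", "u4"}  # a concept to approximate

# Lower approximation: union of classes wholly contained in X.
lower = {u for c in partition if c <= X for u in c}
# Upper approximation: union of classes meeting X.
upper = {u for c in partition if c & X for u in c}

# X is rough here: lower != upper, so X is not a union of classes.
```

With this partition, `lower` is `{"u1", "u2"}` and `upper` is `{"u1", "u2", "u4", "u5"}`, so X is sandwiched strictly between the two.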

Information system:

A system commonly used for representing information about a given world fragment. An information system is a pair of the form \( { (U,A) } \) where U is a set of objects – representing things – and A is a set of attributes; each attribute a is modeled as a mapping \( a\colon U\rightarrow V_a \) from the set of objects into the value set \( V_a \). For an attribute a, the equivalence relation R a is the set \( \{(u,v)\in U\times U\colon a(u)=a(v)\} \) and the collection \( \mathcal{R}=\{R_a\colon a\in A\} \) is the knowledge base. For each set \( { B\subseteq A } \) of attributes, the B‑indiscernibility relation \( { \operatorname{ind}(B) } \) is the set \( \{(u,v)\in U\times U\colon a(u)=a(v)\text{ for each }a\in B\} \). Each object \( { u\in U } \) can be described in two ways: first, by means of its indiscernibility class \( [u]_A=\{v\in U\colon (u,v) \in \operatorname{ind}(A)\} \); next, by means of its description logic formula \( \phi^A(u)\colon \bigwedge_{a\in A}(a=a(u)) \); as \( { [\phi^A(u)]=[u]_A } \), both descriptions are equivalent and u is described up to its equivalence class.
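Computing an indiscernibility class from an attribute-value table is a one-liner. The table below is an invented example of an information system \( (U,A) \).

```python
# A hypothetical information system: three objects, attributes a and b.
U = {"u1": {"a": 1, "b": 0},
     "u2": {"a": 1, "b": 0},
     "u3": {"a": 1, "b": 1}}

def ind_class(u, B):
    """[u]_B = {v in U : a(v) = a(u) for each a in B}."""
    return {v for v, row in U.items()
            if all(row[a] == U[u][a] for a in B)}

# u1 and u2 agree on every attribute, hence are {a,b}-indiscernible;
# restricting to B = {a} merges all three objects into one class.
```

Here `ind_class("u1", ["a", "b"])` yields `{"u1", "u2"}`, while `ind_class("u1", ["a"])` yields the whole universe, illustrating how coarser attribute sets give coarser classifications.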

Decision system:

A particular form of an information system, a triple \( { (U,A,d) } \) in which d is the decision, an attribute not in A, that expresses the evaluation of objects by an external oracle, an expert. Attributes in A are called conditional attributes, which expresses the fact that they are set by us as conditions describing objects in our language. The principal problem related to a decision system is to find a description of the dependence between the conditional knowledge given by \( { (U,A) } \) and the expert knowledge \( { (U,\{d\}) } \).

Decision rule:

A decision rule is a formula in the descriptor language that expresses a particular relation among conditional attributes in the attribute set A and the decision d, of the form \( { \bigwedge_{a\in A}(a=v_a)\Rightarrow (d=v) } \), with the semantics defined in (Glossary: Description logic). The formula is true, or certain, in case \( { [\bigwedge_{a\in A}(a=v_a)]=\bigcap_{a\in A}[(a=v_a)] \subseteq [(d=v)] } \). Otherwise, the formula is partially true, or possible. An object o which satisfies the premise of the rule, that is, \( { a(o)=v_a } \) for each \( { a\in A } \), can be assigned to the class \( { [(d=v)] } \); often a partial match based on a chosen similarity measure must be performed instead.
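The certainty test for a rule reduces to a subset check between two meanings. The decision table below is invented for illustration.

```python
# A hypothetical decision system: conditional attribute a, decision d.
rows = {"u1": {"a": 1, "d": "yes"},
        "u2": {"a": 1, "d": "yes"},
        "u3": {"a": 2, "d": "no"}}

def certain(premise, decision):
    """True iff [premise] is a subset of [(d=decision)]."""
    match = {u for u, r in rows.items()
             if all(r[a] == v for a, v in premise.items())}
    target = {u for u, r in rows.items() if r["d"] == decision}
    return match <= target

# (a=1) => (d=yes) is certain here; (a=2) => (d=yes) is merely false
# on this table, as u3 satisfies the premise but not the conclusion.
```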

Similarity:

Similarity relations are weaker forms of indiscernibility: indiscernible objects are similar but not necessarily vice versa. An example attributable to Henri Poincaré [35] explains this best: assume that points \( { x,y } \) on the real line are similar whenever their distance is less than a fixed threshold δ: \( { |x-y|<\delta } \). Then certainly each x is similar to x, and if x is similar to y then y is similar to x; but from x similar to y and y similar to z it need not follow that x is similar to z: \( { |x-z| } \) can be close to \( { 2\delta } \). Relations of this kind are reflexive and symmetric; they were called tolerance relations by Zeeman [67]. Tolerance relations in the framework of rough set theory and information systems were considered by Nieminen [26], and tolerance reducts and related notions in information systems were discussed by Polkowski, Skowron and Żytkow [50]. Some authors relax tolerance relations to similarity relations that need not be symmetric: for instance, rough inclusions need not be symmetric [37].

In many applications, metrics are taken as similarity measures: a metric on a set U is a function \( d\colon U\times U\rightarrow R^+ \) into the non‑negative reals such that 1. \( { d(x,y)=0 } \) if and only if \( { x=y } \); 2. \( { d(x,y)=d(y,x) } \); 3. \( d(x,y)\leq d(x,z) + d(z,y) \) for each \( { x,y,z } \) in U (the triangle inequality). A similarity induced by a metric d is usually of the form \( { d(x,y)<\delta } \) for a certain \( { \delta > 0 } \), Poincaré‑style.
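Poincaré's non-transitivity argument can be checked numerically; the threshold and points are arbitrary choices.

```python
# Threshold similarity on the reals: reflexive, symmetric, not transitive.
delta = 1.0
sim = lambda x, y: abs(x - y) < delta

x, y, z = 0.0, 0.9, 1.8
# x ~ y and y ~ z, yet |x - z| = 1.8 is close to 2*delta, so x !~ z.
print(sim(x, y), sim(y, z), sim(x, z))  # True True False
```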

Rough inclusion:

A ternary relation μ on the set \( U\times U\times [0,1] \) which satisfies the conditions: 1. \( \mu(x, x,1) \); 2. \( \mu(x,y,1) \) is a binary partial order relation on U; 3. \( \mu(x, y,1) \) implies that for each object z in U: if \( \mu(z, x,r) \) then \( \mu(z, y,r) \); 4. \( \mu(x, y,r) \) and \( { s<r } \) imply \( \mu(x, y,s) \). The formula \( \mu(x, y,r) \) is read as “the object x is a part of the object y to a degree at least r”. The partial containment idea encompasses the idea of an exact part, that is, the mereological theory of concepts of Leśniewski [15]. A similarity induced by a rough inclusion can be introduced as an asymmetric relation, \( \mu(x, y,r) \), or a symmetric one, \( \mu(x,y,r) \wedge\mu(y,x,r) \).
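A standard concrete rough inclusion over an information system measures the fraction of attributes on which two objects agree; this "agreement-fraction" form appears in the rough mereology literature, though the attribute set and objects below are invented.

```python
# Sketch of a rough inclusion: mu(x, y, r) iff the fraction of
# attributes on which x and y agree is at least r.
A = ["a", "b", "c", "d"]
obj = {"x": {"a": 1, "b": 0, "c": 1, "d": 1},
       "y": {"a": 1, "b": 1, "c": 1, "d": 1}}

def mu(x, y, r):
    agree = sum(1 for a in A if obj[x][a] == obj[y][a])
    return agree / len(A) >= r

# x and y agree on 3 of 4 attributes, so x is part of y to degree 0.75.
print(mu("x", "y", 0.75), mu("x", "y", 0.9))  # True False
```

Condition 1 holds trivially (`mu(x, x, 1)` is always true), and condition 4 holds because the agreement fraction is compared against a threshold.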

Granulation of knowledge:

The concept of granulation was proposed by Zadeh [66]. In fuzzy calculus, granules are naturally induced as the inverse images of fuzzy membership functions, and fuzzy computing is a fortiori computing with granules. A similar case happens with rough sets, as primitive objects in this paradigm are indiscernibility classes that are elementary granules, whereas their unions are granules of knowledge; reasoning in rough sets means, by definition, reasoning with granules. Each similarity relation induces granules understood as its classes; in comparison with indiscernibility relations, granules induced by similarity relations are more intricate due to the lack of transitivity: relations among granules are more difficult to ascertain. In defining granules, attention is focused on the property that singles out objects belonging to the granule; usually, this property is a relation to an object chosen as the granule center: this means that distinct objects in the granule need not be in the relation. This feature distinguishes granules from clusters: aggregates in which any two objects have the defining property, for example, distance less than the chosen threshold. Also, relations between pairs of distinct granules are difficult to ascertain, contrary to the case of (for example) clusters, in which case the distance between distinct clusters is kept above a fixed threshold.
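The distinction between a granule about a center and a cluster can be seen on a toy example (similarity relation and universe invented; Poincaré-style threshold similarity as above):

```python
# A granule about a center collects all objects similar to the center.
delta = 1.0
sim = lambda x, y: abs(x - y) < delta

def granule(center, universe):
    return {u for u in universe if sim(center, u)}

U = [0.0, 0.6, -0.8, 1.5]
g = granule(0.0, U)  # {0.0, 0.6, -0.8}

# Two members of the granule need not be similar to each other:
# 0.6 and -0.8 both lie within delta of the center, but |0.6-(-0.8)|=1.4.
```

This is exactly the feature noted in the text: the defining property is a relation to the center, not a pairwise property of granule members, as it would be in a cluster.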

Classification of objects:

Classification entails assigning to each element of a set of objects (the test sample) a class (a decision) to which the given element should belong; it is accomplished on the basis of knowledge induced from a given collection of examples (the training sample). To perform this task, objects are usually mapped onto vectors in a multi‑dimensional real vector space (the feature space). Paradigms invented to perform classification are many, from heuristics based on cognitive networks such as neural networks, through probabilistic tools such as Bayesian networks, to tools based on distance such as k-nn methods and prototype methods (see [7,12]).
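As a minimal instance of the distance-based paradigm, a 1-nearest-neighbour classifier assigns each test point the class of its closest training example (the training data below is invented):

```python
import math

# Hypothetical training sample: feature vectors with class labels.
train = [((0.0, 0.0), "A"), ((1.0, 0.0), "A"), ((5.0, 5.0), "B")]

def classify(x):
    """Assign x the label of its nearest training point (1-nn)."""
    return min(train, key=lambda p: math.dist(x, p[0]))[1]

print(classify((0.2, 0.1)))  # A
print(classify((4.0, 4.5)))  # B
```

A k-nn classifier generalizes this by taking a majority vote over the k nearest training points instead of only the closest one.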

Fusion of knowledge:

A combination of knowledge coming from several distinct sources. For example, in a mobile robot, fusion of knowledge by means of a Kalman filter (see [52]) consists in producing an output – the robot's predicted position at the current moment – on the basis of inputs: the robot's position estimated at the last moment preceding the current one, the current controls, and the sensor readings in the current position.
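A one-dimensional sketch of a single Kalman filter step shows the fusion: a position predicted from the previous estimate and the control is blended with a sensor reading according to their relative uncertainties. All numbers and the additive motion model are illustrative assumptions.

```python
def kalman_step(x, P, u, z, Q=0.1, R=0.5):
    """One 1-D Kalman step: x, P = previous estimate and its variance,
    u = control (commanded motion), z = sensor reading,
    Q = process noise variance, R = measurement noise variance."""
    # Predict: apply the control; uncertainty grows by process noise.
    x_pred, P_pred = x + u, P + Q
    # Update: Kalman gain weighs prediction against measurement.
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

# Fuse a prediction (move 1.0 from position 0.0) with a reading of 1.2:
x, P = kalman_step(x=0.0, P=1.0, u=1.0, z=1.2)
```

The fused estimate lands between the prediction (1.0) and the measurement (1.2), closer to whichever source is more certain, and the posterior variance shrinks below both.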

Reasoning:

Reasoning is the process by means of which sentences are created; its various forms depend on the chosen system of notions, the symbolic representation of notions, and the forms of manipulating symbols (see [6]).

Mereology:

Mereology, see Leśniewski [15], is a theory of concepts based on the notion of a part instead of, as with naive set theory, the notion of an element. The relation of being a part is irreflexive and transitive. The secondary notion of an ingredient means being either a part or the whole object, and it sets a partial order on objects. By means of the notion of an ingredient, the notion of a class operator is introduced in Leśniewski [15], which converts ontological notions, that is, collections of objects, into an individual object; it is used here in granule formation.

Bibliography

  1. Artiemjew P (2007) Classifiers from granulated data sets: Concept dependent and layered granulation. In: Proceedings RSKD'07, Workshop at ECML/PKDD'07. Warsaw Univ Press, Warsaw, pp 1–9

  2. Baader F, Calvanese D, McGuinness D, Nardi D, Patel-Schneider P (eds) (2004) The description logic handbook: theory, implementation and applications. Cambridge U Press, Cambridge

  3. Bazan JG (1998) A comparison of dynamic and non‐dynamic rough set methods for extracting laws from decision tables. In: Polkowski L, Skowron A (eds) Rough sets in knowledge discovery 1. Physica, Heidelberg, pp 321–365

  4. Bazan JG, Nguyen HS, Nguyen SH, Synak P, Wróblewski J (2000) Rough set algorithms in classification problems. In: Polkowski L, Tsumoto S, Lin TY (eds) Rough set methods and applications. New developments in knowledge discovery in information systems. Physica, Heidelberg, pp 49–88

  5. Bishop CM (1997) Neural networks for pattern recognition, 2nd edn. Clarendon Press, Oxford

  6. Bocheński JM (1954) Die zeitgenössischen Denkmethoden. A Francke AG, Bern

  7. Duda RO, Hart PE, Stork DG (2001) Pattern classification. Wiley, New York

  8. Greco S, Matarazzo B, Słowiński R (1999) On joint use of indiscernibility, similarity and dominance in rough approximation of decision classes. In: Proceedings of the 5th International Conference of the Decision Sciences Institute, Athens, Greece, pp 1380–1382

  9. Grzymala‐Busse JW (1992) LERS – a system for learning from examples based on rough sets. In: Słowiński R (ed) Intelligent decision support: Handbook of advances and applications of the rough sets theory. Kluwer, Dordrecht, pp 3–18

  10. Grzymala‐Busse JW (2004) Data with missing attribute values: Generalization of indiscernibility relation and rule induction. In: Transactions on rough sets, vol I. Lecture Notes in Computer Science, vol 3100. Springer, Berlin, pp 78–95

  11. Grzymala‐Busse JW, Hu M (2000) A comparison of several approaches to missing attribute values in data mining. In: Rough sets and current trends in computing. Lecture Notes in Computer Science, vol 2005. Springer, Berlin, pp 378–385

  12. Klösgen W, Żytkow J (eds) (2002) Handbook of data mining and knowledge discovery. Oxford University Press, Oxford

  13. Krawiec et al (1998) Learning decision rules from similarity based rough approximations. In: Polkowski L, Skowron A (eds) Rough sets in knowledge discovery 2. Physica, Heidelberg, pp 37–54

  14. Kuratowski K, Mostowski A (1966) Set theory. Polish Scientific Publishers, Warsaw

  15. Leśniewski S (1916) Podstawy ogólnej teoryi mnogosci (On the foundations of set theory), in Polish. The Polish Scientific Circle, Moscow; see also a later digest: Topoi 2:7–52 (1982)

  16. Lin TY (1988) Neighborhood systems and relational database. Abstract. In: Proceedings of CSC'88, p 725

  17. Lin TY (1989) Neighborhood systems and approximation in database and knowledge based systems. In: Proceedings of the 4th International Symposium on Methodologies of Intelligent Systems, Poster Session, pp 75–86

  18. Lin TY (1992) Topological and fuzzy rough sets. In: Słowiński R (ed) Intelligent decision support – handbook of applications and advances of the rough sets theory. Kluwer, Dordrecht, pp 287–304

  19. Lin TY (1997) From rough sets and neighborhood systems to information granulation and computing with words. In: Proceedings of the European Congress on Intelligent Techniques and Soft Computing, pp 1602–1606

  20. Lin TY (1999) Granular computing: fuzzy logic and rough sets. In: Zadeh LA, Kacprzyk J (eds) Computing with words in information/intelligent systems 1. Physica, Heidelberg, pp 183–200

  21. Lin TY (2003) Granular computing. In: Lecture Notes in Computer Science, vol 2639. Springer, Berlin, pp 16–24

  22. Lin TY (2005) Granular computing: Examples, intuitions, and modeling. In: Proceedings of IEEE 2005 Conference on Granular Computing GrC05, Beijing, China. IEEE Press, New York, pp 40–44

  23. Lin TY (2006) A roadmap from rough set theory to granular computing. In: Proceedings RSKT 2006, 1st International Conference on Rough Sets and Knowledge Technology, Chongqing, China. Lecture Notes in Artificial Intelligence, vol 4062. Springer, Berlin, pp 33–41

  24. Ling C-H (1965) Representation of associative functions. Publ Math Debrecen 12:189–212

  25. Michalski RS et al (1986) The multi‐purpose incremental learning system AQ15 and its testing to three medical domains. In: Proceedings of AAAI-86. Morgan Kaufmann, San Mateo CA, pp 1041–1045

  26. Nieminen J (1988) Rough tolerance equality and tolerance black boxes. Fundamenta Informaticae 11:289–296

  27. Nguyen SH (2000) Regularity analysis and its applications in data mining. In: Polkowski L, Tsumoto S, Lin TY (eds) Rough set methods and applications. New developments in knowledge discovery in information systems. Physica, Heidelberg, pp 289–378

  28. Novotny M, Pawlak Z (1988) Partial dependency of attributes. Bulletin Polish Academy. Ser Sci Math 36:453–458

  29. Novotny M, Pawlak Z (1992) On a problem concerning dependence spaces. Fundamenta Informaticae 16:275–287

  30. Pawlak Z (1982) Rough sets. Intern J Comput Inf Sci 11:341–356

  31. Pawlak Z (1985) On rough dependency of attributes in information systems. Bulletin Polish Academy. Ser Sci Tech 33:551–559

  32. Pawlak Z (1991) Rough sets: theoretical aspects of reasoning about data. Kluwer, Dordrecht

  33. Pawlak Z, Skowron A (1993) A rough set approach for decision rules generation. In: Proceedings of IJCAI'93 Workshop W12: The Management of Uncertainty in AI, France; also ICS Research Report 23/93, Warsaw University of Technology, Institute of Computer Science

  34. Pawlak Z, Skowron A (1994) Rough membership functions. In: Yaeger RR, Fedrizzi M, Kasprzyk J (eds) Advances in the Dempster–Schafer theory of evidence. Wiley, New York, pp 251–271

  35. Poincaré H (1905) Science and hypothesis. Walter Scott Publishing, London

  36. Polkowski L (2002) Rough sets. Mathematical foundations. Physica, Heidelberg

  37. Polkowski L (2003) A rough set paradigm for unifying rough set theory and fuzzy set theory. In: Proceedings RSFDGrC03, Chongqing, China. Lecture Notes in AI, vol 2639. Springer, Berlin, pp 70–78; also in: Fundamenta Informaticae 54:67–88

  38. Polkowski L (2004) A note on 3‑valued rough logic accepting decision rules. Fundamenta Informaticae 61:37–45

  39. Polkowski L (2004) Toward rough set foundations. Mereological approach. In: Proceedings RSCTC04, Uppsala, Sweden. Lecture Notes in AI, vol 3066. Springer, Berlin, pp 8–25

  40. Polkowski L (2005) Formal granular calculi based on rough inclusions. In: Proceedings of IEEE 2005 Conference on Granular Computing GrC05, Beijing, China. IEEE Press, New York, pp 57–62

  41. Polkowski L (2005) Rough‐fuzzy‐neurocomputing based on rough mereological calculus of granules. Intern J Hybrid Intell Syst 2:91–108

  42. Polkowski L (2006) A model of granular computing with applications. In: Proceedings of IEEE 2006 Conference on Granular Computing GrC06, Atlanta, USA. IEEE Press, New York, pp 9–16

  43. Polkowski L (2007) Granulation of knowledge in decision systems: The approach based on rough inclusions. The method and its applications. In: Proceedings RSEiSP'07 (in memory of Z. Pawlak). Lecture Notes in AI, vol 4585. pp 69–79

  44. Polkowski L (2007) The paradigm of granular rough computing. In: Proceedings ICCI'07, 6th IEEE Intern Conf on Cognitive Informatics. IEEE Computer Society, Los Alamitos, pp 145–163

  45. Polkowski L, Artiemjew P (2007) On granular rough computing: Factoring classifiers through granular structures. In: Proceedings RSEISP'07, Warsaw. Lecture Notes in AI, vol 4585. Springer, Berlin, pp 280–289

  46. Polkowski L, Semeniuk‐Polkowska M (2005) On rough set logics based on similarity relations. Fundamenta Informaticae 64:379–390

  47. Polkowski L, Skowron A (1994) Rough mereology. In: Proceedings of ISMIS'94. Lecture Notes in AI, vol 869. Springer, Berlin, pp 85–94

  48. Polkowski L, Skowron A (1997) Rough mereology: a new paradigm for approximate reasoning. Intern J Approx Reason 15(4):333–365

  49. Polkowski L, Skowron A (1999) Towards an adaptive calculus of granules. In: Zadeh LA, Kacprzyk J (eds) Computing with words in information/intelligent systems 1. Physica, Heidelberg, pp 201–228

  50. Polkowski L, Skowron A, Żytkow J (1994) Tolerance based rough sets. In: Lin TY, Wildberger M (eds) Soft computing: rough sets, fuzzy logic, neural networks, uncertainty management, knowledge discovery. Simulation Councils Inc, San Diego CA, pp 55–58

  51. RSES: Skowron A et al (1994) A system for data analysis. http://logic.mimuw.edu.pl/%7Erses/. Accessed 19 Sept 2008

  52. Russell SJ, Norvig P (2003) Artificial intelligence. A modern approach, 2nd edn. Prentice Hall (Pearson Education Inc), Upper Saddle River NJ

  53. Skowron A (1993) Boolean reasoning for decision rules generation. In: Komorowski J, Ras Z (eds) Proceedings of ISMIS'93. Lecture Notes in AI, vol 689. Springer, Berlin, pp 295–305

  54. Skowron A, Rauszer C (1992) The discernibility matrices and functions in decision systems. In: Słowiński R (ed) Intelligent decision support. Handbook of applications and advances of the rough sets theory. Kluwer, Dordrecht, pp 311–362

  55. Skowron A, Stepaniuk J (2001) Information granules: towards foundations of granular computing. Intern J Intell Syst 16:57–85

  56. Skowron A, Swiniarski RW (2004) Information granulation and pattern recognition. In: Pal SK, Polkowski L, Skowron A (eds) Rough – neural computing. Techniques for computing with words. Springer, Berlin, pp 599–636

  57. Stefanowski J (1998) On rough set based approaches to induction of decision rules. In: Polkowski L, Skowron A (eds) Rough sets in knowledge discovery 1. Physica, Heidelberg, pp 500–529

  58. Stefanowski J (2007) On combined classifiers, rule induction and rough sets. In: Transactions on rough sets, vol VI. Lecture Notes in Computer Science, vol 4374. Springer, Berlin, pp 329–350

  59. UCI Repository: available at http://archive.ics.uci.edu/ml/. Accessed 19 Sept 2008

  60. van Benthem J (1988) A manual of intensional logic. CSLI Stanford University, Stanford CA

  61. Wille R (1982) Restructuring lattice theory: An approach based on hierarchies of concepts. In: Rival I (ed) Ordered sets. Reidel, Dordrecht, pp 445–470

  62. Wojna A (2005) Analogy‐based reasoning in classifier construction. In: Transactions on rough sets, vol IV. Lecture Notes in Computer Science, vol 3700. Springer, Berlin, pp 277–374

  63. Wróblewski J (2004) Adaptive aspects of combining approximation spaces. In: Pal SK, Polkowski L, Skowron A (eds) Rough – neural computing. Techniques for computing with words. Springer, Berlin, pp 139–156

  64. Yao YY (2000) Granular computing: Basic issues and possible solutions. In: Proceedings of the 5th Joint Conference on Information Sciences I. Assoc Intell Machinery, Atlantic NJ, pp 186–189

  65. Yao YY (2005) Perspectives of granular computing. In: Proceedings of IEEE 2005 Conference on Granular Computing GrC05, Beijing, China. IEEE Press, New York, pp 85–90

  66. Zadeh LA (1979) Fuzzy sets and information granularity. In: Gupta M, Ragade R, Yaeger RR (eds) Advances in fuzzy set theory and applications. North-Holland, Amsterdam, pp 3–18

  67. Zeeman EC (1965) The topology of the brain and the visual perception. In: Fort MK (ed) Topology of 3‑manifolds and selected topics. Prentice Hall, Englewood Cliffs NJ, pp 240–256


Copyright information

© 2009 Springer-Verlag

About this entry

Cite this entry

Polkowski, L. (2009). Granulation of Knowledge: Similarity Based Approach in Information and Decision Systems. In: Meyers, R. (eds) Encyclopedia of Complexity and Systems Science. Springer, New York, NY. https://doi.org/10.1007/978-0-387-30440-3_262
