Abstract
Conventional clustering algorithms classify a set of objects into clusters with clear boundaries; that is, one object must belong to exactly one cluster. In the real world, however, many objects belong to more than one cluster, since the boundaries of clusters generally overlap. Fuzzy set representation of clusters makes it possible for each object to belong to more than one cluster. On the other hand, it has been pointed out that the fuzzy degree is sometimes too descriptive for interpreting clustering results. Rough set representation can deal with such cases: clustering based on rough sets can provide a solution that is less restrictive than conventional clustering and less descriptive than fuzzy clustering. Lingras et al. (Lingras and Peters, Wiley Interdiscip Rev: Data Min Knowl Discov 1(1):64–72, 2011, [1] and Lingras and West, J Intell Inf Syst 23(1):5–16, 2004, [2]) therefore proposed a clustering method based on rough sets, rough K-means (RKM). RKM is essentially the only algorithm of this kind inspired by KM, and some of its assumptions are very natural; however, it has the drawback that the algorithm is not based on any objective function. Outputs of non-hierarchical clustering algorithms strongly depend on initial values, and the "better" output among many outputs from different initial values should be chosen by comparing the values of the objective function of each output. The objective function therefore plays a very important role in clustering algorithms. From this standpoint, we have proposed several rough clustering algorithms based on objective functions. This paper presents these rough clustering algorithms, each of which is based on optimization of an objective function.
1 Introduction
As the importance of data analysis increases, clustering techniques have attracted more attention [3] and more clustering algorithms have been proposed.
Conventional clustering algorithms partition a set of objects into clusters with clear boundaries. In other words, one object must belong to exactly one cluster. k-means (KM) [4], also called hard c-means (HCM), is a representative example.
However, the boundaries may not be clear in practice, and quite a few objects should belong to more than one cluster. Fuzzy set representation of clusters makes it possible for each object to belong to more than one cluster, with the degree of belongingness of an object to each cluster represented as a value in the unit interval [0, 1]. Fuzzy c-means (FCM) [5, 6] achieves this representation by introducing a fuzzification parameter into KM.
On the other hand, it has been pointed out that the fuzzy degree may sometimes be too descriptive for interpreting clustering results [1]. In such cases, rough set representation is considered a useful and powerful tool [7, 8]. The basic concept of the representation rests on two definitions, the lower and upper approximations of a set. The lower approximation means that "an object surely belongs to the set", while the upper one means that "an object possibly belongs to the set". Clustering based on rough sets could provide a solution that is less restrictive than conventional clustering and less descriptive than fuzzy clustering [1, 9]; therefore, rough set based clustering has attracted increasing interest among researchers [1, 2, 10,11,12,13,14].
Rough k-means (RKM) proposed by Lingras et al. [1, 2] is one of the earliest rough set based clustering methods. In RKM, the degree of belongingness and the cluster centers are calculated by an iterative process like that of KM or FCM. However, RKM has the problem that the algorithm is not constructed based on optimization of an objective function. Here, we call clustering based on optimization of an objective function "objective-based clustering". In other words, the calculation outputs of objective-based clustering minimize the objective function.
Many non-hierarchical clustering algorithms such as KM and FCM are objective-based. The outputs of such algorithms strongly depend on initial values. Hence, we need some indicator when choosing the "better" outputs among many outputs from different initial values. The objective function plays a very important role as this indicator; that is, we can choose the "better" outputs by comparing the values of the objective function across outputs.
RKM is one of the most representative algorithms inspired by KM, and some of its assumptions are very natural; however, it is not useful from the viewpoint that the algorithm is not based on any objective function, because we then have no indicator for choosing "better" outputs. Some rough set based clustering algorithms built on an objective function have been proposed [12]; however, they may be somewhat complicated, and it is not easy to extend their theoretical discussion.
We have proposed some objective-based rough clustering methods. This paper shows the objective functions and algorithms of these methods: type-I rough c-means, type-II rough c-means, and the rough non metric model. For each method, we show both a rough hard clustering and a rough fuzzy clustering variant.
2 Rough Sets
2.1 Concept of Rough Sets
Let U be the universe and \(R \subseteq U \times U\) be an equivalence relation on U. R is also called indiscernibility relation. The pair \(X=(U,R)\) is called an approximation space. If \(x, y \in U\) and \((x,y) \in R\), we say that x and y are indistinguishable in X.
Equivalence classes of the relation R are called elementary sets in X. The set of all elementary sets is denoted by U / R. The empty set is also elementary in every X.
Every finite union of elementary sets in X is called a composed set in X.
Since it is impossible to distinguish the elements in the same equivalence class, we may not be able to get a precise representation for an arbitrary subset \(A \subset U\). Instead, any A can be represented by its lower and upper bounds. The upper bound \(\overline{A}\) is the least composed set in X that contains A, called the best upper approximation or, in short, the upper approximation. The lower bound \(\underline{A}\) is the greatest composed set in X that is included in A, called the best lower approximation or, briefly, the lower approximation. The set \(\mathrm{Bnd}(A) = \overline{A} - \underline{A}\) is called the boundary of A in X.
The pair \((\underline{A},\overline{A})\) is the representation of an ordinary set A in the approximation space X, or simply the rough set of A. The elements in the lower approximation of A definitely belong to A, while elements in the upper bound of A may or may not belong to A.
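As a concrete illustration of these definitions, the lower and upper approximations can be computed directly from the elementary sets. The following is a minimal sketch (the function name and the toy partition are our own, for illustration only):

```python
# A minimal sketch of lower and upper approximations, assuming the
# equivalence classes of R are given explicitly as a partition of U.
def approximations(partition, A):
    """Return (lower, upper) approximations of the set A in X = (U, R)."""
    A = set(A)
    lower, upper = set(), set()
    for block in partition:          # each block is one elementary set
        block = set(block)
        if block <= A:               # block entirely inside A
            lower |= block
        if block & A:                # block meets A
            upper |= block
    return lower, upper

# Example: U = {1,...,6}, elementary sets {1,2}, {3,4}, {5,6}, A = {1,2,3}.
lower, upper = approximations([{1, 2}, {3, 4}, {5, 6}], {1, 2, 3})
# lower = {1, 2}; upper = {1, 2, 3, 4}
```

Here the boundary \(\mathrm{Bnd}(A)=\overline{A}-\underline{A}\) is \(\{3, 4\}\): that block meets A but is not contained in it.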
2.2 Conditions of Rough Clustering
Let a set of objects and a set of equivalence classes by an equivalence relation R be \(U=\{x_k \mid x_k=(x_{k1},\dots ,x_{kp})^T \in \mathfrak {R}^p, \ k=1,\dots ,n\}\) and \(U/R=\{A_i \mid i=1,\dots ,c\}\), respectively. \(v_i=(v_{i1},\dots ,v_{ip})^T \in \mathfrak {R}^p\) \((i=1,\dots ,c)\) denotes the cluster center of a cluster \(A_i\). Note that \(A_i \ne \emptyset \) for any i; that is, \(\underline{A}_i = \emptyset \) implies \(\mathrm{Bnd}(A_i) \ne \emptyset \), and similarly, \(\mathrm{Bnd}(A_i) = \emptyset \) implies \(\underline{A}_i \ne \emptyset \).
Lingras et al., who proposed rough K-means (RKM) [1, 2], put the following conditions. Their conditions are very natural from the viewpoint of the definition of rough sets.
- (C1): An object x can be part of at most one lower bound.
- (C2): If \(x \in \underline{A}_i\), then \(x \in \overline{A}_i\).
- (C3): An object x is not part of any lower bound if and only if x belongs to two or more upper bounds.
Note that the above conditions are not necessarily independent or complete.
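To make (C1)–(C3) concrete, the following sketches a Lingras-style rough k-means iteration in Python. The closeness threshold `eps` for deciding boundary membership, the weights, and the initialization are illustrative assumptions on our part, not the authors' exact formulation:

```python
# A hedged sketch of Lingras-style rough k-means (RKM) satisfying
# conditions (C1)-(C3).  `eps`, the weights, and the initialization
# are our own illustrative choices.
import numpy as np

def rough_kmeans(X, c, w_lower=0.7, eps=0.3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)].astype(float)
    w_upper = 1.0 - w_lower
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # clusters almost as close as the nearest one -> upper bounds
        upper = [set(np.flatnonzero(d[k] - d[k, nearest[k]] <= eps))
                 for k in range(len(X))]
        # unique nearest cluster -> lower bound; otherwise no lower bound
        lower = [({nearest[k]} if len(upper[k]) == 1 else set())
                 for k in range(len(X))]
        for i in range(c):
            low = [k for k in range(len(X)) if i in lower[k]]
            bnd = [k for k in range(len(X))
                   if i in upper[k] and i not in lower[k]]
            if low and bnd:
                V[i] = (w_lower * X[low].mean(axis=0)
                        + w_upper * X[bnd].mean(axis=0))
            elif low:
                V[i] = X[low].mean(axis=0)
            elif bnd:
                V[i] = X[bnd].mean(axis=0)
    return V, lower, upper
```

An object whose nearest and second-nearest centers are almost equally close (within `eps`) is placed in two or more upper bounds and in no lower bound, which is exactly the situation (C3) describes.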
3 Type-I Rough c-Means
3.1 Type-I Rough Hard c-Means
In this section, we describe type-I rough c-means (RCM-I). To distinguish it from the rough fuzzy c-means described later, we also call it type-I rough hard c-means (RHCM-I).
3.1.1 Objective Function
For any objects \(x_k=(x_{k1},\dots ,x_{kp})^T\in \mathfrak {R}^p\) \((k=1,\dots ,n)\), \(\nu _{ki}\) and \(u_{ki}\) \((i=1,\dots ,c)\) mean belongingness of an object \(x_k\) to a lower approximation of \(A_i\) and a boundary of \(A_i\), respectively. Partition matrices of \(\nu _{ki}\) and \(u_{ki}\) are denoted by \(N=\{\nu _{ki}\}\) and \({U}=\{u_{ki}\}\), respectively. We define an objective function of RCM-I as follows:
where
For any k, constraints are as follows:
The last term of (1) is a regularization term. Without this term, the trivial solutions \(\nu _{ki}=0\) and \(u_{ki}=0\) would result. From the above constraints, we can derive the following relation for any k:
It is obvious that these relations are equivalent to (C1)–(C3) in Sect. 2.2.
3.1.2 Derivation of Optimal Solutions and Algorithm
We first obtain the optimal solution for \(v_i\) with \(\nu _{ki}\) and \(u_{ki}\) fixed. Here, we introduce the following function:
Since \(\nu _{ki}\) and \(u_{ki}\) are fixed, \(v_i\) which minimizes \(J^i_\text {RCM-I}\) is an optimal solution which also minimizes \(J_\text {RCM-I}\). Now, we have to consider the following two cases:
1. \(\underline{A}_i \ne \emptyset \) and \(\mathrm{Bnd}(A_i) \ne \emptyset \), that is, \(|\underline{A}_i| \cdot |\mathrm{Bnd}(A_i)| \ne 0\).
2. \(\underline{A}_i = \emptyset \) or \(\mathrm{Bnd}(A_i) = \emptyset \), that is, \(\nu _{ki} = 0\) or \(u_{ki} = 0\) for any k.
If \(\underline{A}_i \ne \emptyset \) and \(\mathrm{Bnd}(A_i) \ne \emptyset \), from partially differentiating (2) by \(v_i\),
From \(\frac{\partial J^i_\text {RCM-I}}{\partial v_i} = 0\),
then, we get
We here notice the following relations:
Then, (3) can be rewritten as follows:
Since \(|\underline{A}_i| \cdot |\mathrm{Bnd}(A_i)|\ne 0\),
On the other hand, if \(\underline{A}_i = \emptyset \) or \(\mathrm{Bnd}(A_i) = \emptyset \), then \(\nu _{ki} = 0\) or \(u_{ki} = 0\) for any k. In both cases, \(J^i_\text {RCM-I}\) attains its minimum value 0 regardless of \(v_i\). Therefore, we can determine \(v_i\) as follows:
From the above discussion, the optimal solution to \(v_i\) is (4).
Optimal solutions to \(\nu _{ki}\) and \(u_{ki}\) can be obtained by comparing the following two cases:
1. \(x_k\) belongs to the lower approximation \(\underline{A}_{p_k}\) of the cluster whose center \(v_{p_k}\) is nearest to \(x_k\). Here,
$$\begin{aligned} p_k=\arg \min _i d_{ki}. \end{aligned}$$In this case, the value of the term for \(x_k\) of the objective function can be calculated as follows:
$$\begin{aligned} J^\nu _k&=\sum _{l=1,l\ne k}^n\Big (\nu _{kp_k}u_{lp_k}(\underline{w} d_{kp_k} + \overline{w}d_{lp_k}) +(\nu _{kp_k}\nu _{lp_k} + u_{kp_k}u_{lp_k}) D_{kl} \Big ) \nonumber \\&=\sum _{l=1,l\ne k}^n \Big ( \nu _{kp_k} u_{lp_k}(\underline{w}d_{kp_k} + \overline{w}d_{lp_k}) + \nu _{kp_k} \nu _{lp_k}D_{kl} \Big ). \nonumber \end{aligned}$$
2. \(x_k\) belongs to the upper approximations of the two clusters \(\overline{A}_{p_k}\) and \(\overline{A}_{q_k}\) whose cluster centers \(v_{p_k}\) and \(v_{q_k}\) are the first and second nearest to \(x_k\). Here,
$$\begin{aligned} q_k=\arg \min _{i\ne p_k}d_{ki}. \end{aligned}$$In this case, the value of the terms for \(x_k\) of the objective function can be calculated as follows:
$$\begin{aligned} J^u_k&= \sum _{l=1,l \ne k}^n \sum _{i=p_k,q_k} \Big (\nu _{li}u_{ki}(\underline{w}d_{li} + \overline{w}d_{ki}) + (\nu _{ki} \nu _{li} +u_{ki} u_{li}) D_{kl} \Big )\nonumber \\&= \sum _{l=1,l \ne k}^n \sum _{i=p_k,q_k} \Big ( \nu _{li} u_{ki} (\underline{w}d_{li} + \overline{w}d_{ki}) + u_{ki} u_{li} D_{kl} \Big ). \nonumber \end{aligned}$$
Comparing \(J^\nu _k\) and \(J^u_k\), we determine \(\nu _{ki}\) and \(u_{ki}\) as follows:
Here, we construct the RCM-I algorithm using the optimal solutions for N, V, and U derived above. In practice, the optimal solutions are calculated through iterative optimization. We show the RCM-I algorithm as Algorithm 1.
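The comparison of \(J^\nu_k\) and \(J^u_k\) above can be sketched as follows for a single object \(x_k\), assuming crisp (0/1) partition matrices `N` and `U`, center distances `d[k, i]`, and pairwise dissimilarities `D[k, l]` (the function and variable names are ours):

```python
# A sketch of the RCM-I assignment step for one object x_k, computed
# from the two displayed costs J^nu_k and J^u_k.  N, U are binary
# numpy arrays; d holds distances to centers, D pairwise dissimilarities.
import numpy as np

def assign_object(k, N, U, d, D, w_low, w_up):
    p, q = np.argsort(d[k])[:2]            # nearest and second-nearest
    others = [l for l in range(len(d)) if l != k]
    # cost if x_k joins the lower approximation of cluster p
    J_nu = sum(U[l, p] * (w_low * d[k, p] + w_up * d[l, p])
               + N[l, p] * D[k, l] for l in others)
    # cost if x_k joins the boundaries of clusters p and q
    J_u = sum(N[l, i] * (w_low * d[l, i] + w_up * d[k, i])
              + U[l, i] * D[k, l]
              for l in others for i in (p, q))
    N[k, :] = 0
    U[k, :] = 0
    if J_nu <= J_u:
        N[k, p] = 1                        # lower approximation wins
    else:
        U[k, p] = U[k, q] = 1              # two boundaries, as in (C3)
    return N, U
```

On a tie (\(J^\nu_k = J^u_k\)) the object is assigned to the lower approximation; the text does not specify the tie-breaking, so this choice is an assumption.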
We can consider the sequential RCM-I (SRCM-I), in which cluster centers are re-calculated every time the cluster partition changes, as Algorithm 2.
3.2 Type-I Rough Fuzzy c-Means
We have two ways to fuzzify RCM-I by introducing fuzzy set representation. The first is to introduce the fuzzification parameter m, and the second is to introduce an entropy term. Both ways are known to be very useful. We call the method using the first way type-I rough fuzzy c-means (RFCM-I), and that using the second way entropy-regularized type-I rough fuzzy c-means (ERFCM-I). In this paper, we describe RFCM-I.
In RFCM-I, only the degrees of belongingness to \(\mathrm{Bnd}(A_i)\) are fuzzified.
3.2.1 Objective Function
The objective function of RFCM-I is defined as follows:
Constraints are as follows:
3.2.2 Derivation of Optimal Solutions and Algorithm
To get an optimal solution to \(v_i\), we partially differentiate (5) with respect to \(v_i\), getting
We must consider the following two cases to derive optimal solutions to N and U:
1. \(x_k\) belongs to \(\underline{A}_{p_k}\).
2. \(x_k\) belongs to \(\mathrm{Bnd}(A_i)\) for all i.
If \(x_k\) belongs to \(\underline{A}_{p_k}\), optimal solutions and the objective function are represented as follows:
If \(x_k\) belongs to \(\mathrm{Bnd}(A_i)\), optimal solutions and the objective function are represented as follows:
Here,
We calculate the optimal solution to \(u_{ki}\) by using the Lagrange multiplier. From (5), the Lagrange function of RFCM-I is defined as follows:
Comparing (6) and (7), the optimal solutions to N and U are as follows:
Last, we describe the algorithm of RFCM-I.
4 Type-II Rough c-Means
We propose another method, type-II rough c-means (RCM-II) or type-II rough hard c-means (RHCM-II), to solve the problems of Lingras's method. The objective function of RCM-II is simpler than that of RCM-I.
4.1 Type-II Rough Hard c-Means
4.1.1 Objective Function
Let \(N=(\nu _{ki})_{1\le k \le n , \ 1\le i \le c}\) and \(U=(u_{ki})_{1\le k \le n, \ 1\le i \le c}\) be degrees of belongingness of \(x_k\) to \(\underline{A}_i\) and \(\mathrm{Bnd}(A_i)\). Let V be a set of cluster centers. The objective function of RCM-II is defined as follows:
Constraints are as follows:
From these constraints, the following restriction holds true:
These constraints are clearly equivalent to (C1)–(C3). \(J_{\text {RCM-II}}\) is minimized under these constraints.
4.1.2 Derivation of Optimal Solutions and Algorithm
We partially differentiate (8) with respect to \(v_i\). We get
We must consider the following two cases to derive optimal solutions to N and U:
1. \(x_k\) belongs to \(\underline{A}_{p_k}\).
2. \(x_k\) belongs to \(\mathrm{Bnd}(A_{p_k})\) and \(\mathrm{Bnd}(A_{q_k})\).
Here,
If \(x_k\) belongs to \(\underline{A}_{p_k}\), we get the value of the objective function as follows:
If \(x_k\) belongs to \(\mathrm{Bnd}(A_{p_k})\) and \(\mathrm{Bnd}(A_{q_k})\), we get the value of the objective function as follows:
Comparing (10) and (11), we derive the optimal solution to N and U as follows:
We describe the RCM-II algorithm as follows:
4.2 Type-II Rough Fuzzy c-Means
4.2.1 Objective Function
Here, we propose another method, type-II rough fuzzy c-means (RFCM-II), to solve the problems of Lingras's method. RFCM-II is an extension of RCM-II using the concept of fuzzy theory. In RFCM-II, only the degrees of belongingness to \(\mathrm{Bnd}(A_i)\) are fuzzified. The objective function of RFCM-II is defined as follows:
Constraints are as follows:
\(J_{\text {RFCM-II}}\) is minimized under these constraints.
4.2.2 Derivation of Optimal Solutions and Algorithm
First, we derive an optimal solution of the cluster center. Similar to Sect. 4.1.2, we get
Next, we derive optimal solutions of lower approximation and boundary. Similar to Sect. 4.1.2, we must consider the following two cases to derive optimal solutions to N and U:
1. \(x_k\) belongs to \(\underline{A}_{p_k}\).
2. \(x_k\) belongs to \(\mathrm{Bnd}(A_{i})\) for all i.
If \(x_k\) belongs to \(\underline{A}_{p_k}\), we get the value of the objective function as follows:
If \(x_k\) belongs to \(\mathrm{Bnd}(A_i)\), we get the value of the objective function as follows:
Comparing (14) and (15), we derive the optimal solution to N and U as follows:
We calculate the optimal solution to \(u_{ki}\) by using the Lagrange multiplier method. The Lagrange function of RFCM-II is defined as follows:
From the above discussion, we describe the RFCM-II algorithm.
5 Rough Non Metric Model
5.1 Rough Hard Non Metric Model
5.1.1 Objective Function
To construct a new relational clustering algorithm based on rough sets, the rough non metric model (RNM) or rough hard non metric model (RHNM), we define the following objective function based on the non metric model by Roubens [15]:
Here \(\underline{w}+\overline{w}=1\) and \(\underline{w} \in (0,1)\). If \(\underline{w}\) is close to 0, almost all objects belong to the lower approximation; if \(\underline{w}\) is close to 1, almost all objects belong to the upper approximation. \(\underline{w}\) (or \(\overline{w}\)) therefore controls belongingness, and it plays a very important role in our proposed methods. \(D_{kt}\) denotes a dissimilarity between \(x_k\) and \(x_t\). One example is the Euclidean norm:
We consider the following conditions for \(\nu _{ki}\) and \(u_{ki}\):
From (C1)–(C3) in Sect. 2.2, we derive the following constraints:
From the above constraints, we derive the following relation for any k:
It is obvious that these relations are equivalent to (C1)–(C3) in Sect. 2.2.
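The dissimilarities \(D_{kt}\) used in the objective function can be precomputed once. The sketch below uses the squared Euclidean distance; whether the squared or the plain norm is intended is an assumption on our part:

```python
# A sketch of the pairwise dissimilarity matrix D_{kt} for RNM, using
# the squared Euclidean distance (our assumption) between objects.
import numpy as np

def dissimilarity_matrix(X):
    diff = X[:, None, :] - X[None, :, :]   # broadcasted pairwise differences
    return (diff ** 2).sum(axis=2)

X = np.array([[0.0, 0.0], [3.0, 4.0]])
D = dissimilarity_matrix(X)
```

Note that the result satisfies \(D_{kk}=0\) and \(D_{kt}=D_{tk}\), the two properties used in the derivation that follows.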
5.1.2 Derivation of Optimal Solutions and Algorithm
Optimal solutions to \(\nu _{ki}\) and \(u_{ki}\) are obtained by comparing the following two cases for each \(x_k\):
1. \(x_k\) belongs to the lower approximation \(\underline{A}_{p_k}\).
2. \(x_k\) belongs to the boundaries of two clusters \(\overline{A}_{q^1_k}\) and \(\overline{A}_{q^2_k}\).
We describe the details of each case as follows.
In the first case, let us assume that \(x_k\) belongs to the lower approximation \(\underline{A}_{p_k}\). \(p_k\) is derived as follows:
The objective function J is rewritten as follows:
Here
Note that \(D_{kk}=0\) and \(D_{kt}=D_{tk}\), therefore
This means the following relations:
In this case, the value of the objective function is calculated as follows:
Here
In the second case, let us assume that \(x_k\) belongs to the boundaries of two clusters \(\overline{A}_{q^1_k}\) and \(\overline{A}_{q^2_k}\). \(q^1_k\) and \(q^2_k\) are derived as follows:
The objective function J is rewritten as follows:
Here
Therefore
This means the following relations:
In this case, the value of the objective function is calculated as follows:
Here
Comparing \(J^\nu _k\) and \(J^u _k\), we determine \(\nu _{ki}\) and \(u_{ki}\) as follows:
From the above discussion, we show the RNM algorithm as Algorithm 6. The proposed algorithm is constructed based on iterative optimization.
5.2 Rough Fuzzy Non Metric Model
In the previous section, we proposed the RNM algorithm. In the algorithm, an object \(x_k\) belongs to just two boundaries if \(x_k\) does not belong to any lower approximation, since \(u_{ki} \in \{0,1\}\) and the objective function (16) is linear for \(u_{ki}\). In this section, we therefore propose the RFNM algorithm to make \(x_k\) belong to more than one boundary if \(x_k\) does not belong to any lower approximation.
We have two ways to fuzzify RNM. The first is to introduce the fuzzification parameter m, and the second is to introduce an entropy term. Both ways are known to be very useful. We call the method using the first way the rough fuzzy non metric model (RFNM), and that using the second way the entropy-regularized rough fuzzy non metric model (ERFNM). In this paper, we describe RFNM.
5.2.1 Objective Function
We consider the following objective function of RFNM:
Here \(\underline{w}+\overline{w}=1\). \(D_{kt}\) denotes a dissimilarity between \(x_k\) and \(x_t\). The last, entropy term fuzzifies \(u_{ki}\) and makes the objective function nonlinear in \(u_{ki}\). Hence, the optimal solution for \(u_{ki}\) that minimizes the objective function (22) takes a value in [0, 1).
We assume the following conditions for \(\nu _{ki}\) and \(u_{ki}\):
From (C1)–(C3) in Sect. 2.2, we derive the following constraints:
From the above constraints, we derive the following relation for any k:
It is obvious that these relations are equivalent to (C1)–(C3) in Sect. 2.2.
5.2.2 Derivation of Optimal Solutions and Algorithm
As with RNM, optimal solutions to \(\nu _{ki}\) and \(u_{ki}\) are obtained by comparing the following two cases for each \(x_k\):
1. \(x_k\) belongs to the lower approximation \(\underline{A}_{p_k}\).
2. \(x_k\) belongs to the boundaries of two clusters \(\overline{A}_{q^1_k}\) and \(\overline{A}_{q^2_k}\).
In the first case, let us assume that \(x_k\) belongs to the lower approximation \(\underline{A}_{p_k}\). \(p_k\) is derived as follows:
The objective function J is rewritten as follows:
Here
Note that \(D_{kk}=0\) and \(D_{kt}=D_{tk}\). Therefore
This means the following relations:
In this case, the value of the objective function is calculated as follows:
Here
In the second case, let us assume that \(x_k\) belongs to the boundaries of more than one cluster. The objective function J is convex in \(u_{ki}\); hence we derive an optimal solution for \(u_{ki}\) using a Lagrange multiplier.
Here we introduce the following Lagrange function with the constraint (26):
We partially differentiate L by \(u_{ki}\) and get the following equation:
From \(\frac{\partial L}{\partial u_{ki}} = 0\), we obtain the following relation:
where
From the constraint (26) and the above Eq. (29), we get the following equation:
We then obtain the following optimal solution:
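The displayed solution did not survive extraction here; as a hedged reconstruction, entropy regularization of this kind typically yields a softmax-type membership, where \(\lambda\) denotes the entropy weight and \(\tilde{D}_{ki}\) the coefficient of \(u_{ki}\) in the objective (both symbols are our notation, not the authors'):

```latex
% Hedged reconstruction: \lambda is the entropy weight and
% \tilde{D}_{ki} the coefficient of u_{ki} in the objective.
u_{ki} \;=\;
  \frac{\exp\!\bigl(-\tilde{D}_{ki}/\lambda\bigr)}
       {\sum_{j=1}^{c}\exp\!\bigl(-\tilde{D}_{kj}/\lambda\bigr)}
```

Such a solution lies in (0, 1) and sums to one over clusters, consistent with the remark in Sect. 5.2.1 that the optimal \(u_{ki}\) lies in [0, 1).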
This means the following relations:
In this case, the value of the objective function is calculated as follows:
Here
Comparing \(J_{k}^{\nu }\) and \(J_{k}^{u}\), we determine \(\nu _{ki}\) and \(u_{ki}\) as follows:
From the above discussion, we show the RFNM algorithm as Algorithm 7. The proposed algorithm is also constructed based on iterative optimization.
6 Conclusion
This paper has shown various types of objective functions for objective-based rough clustering, together with their algorithms.
As mentioned above, many non-hierarchical clustering algorithms are based on optimization of some objective function. The reason is that we can choose the "better" output among many outputs from different initial values by comparing the values of the objective function across outputs. Lingras's algorithm is essentially the only rough set based algorithm inspired by KM; however, it is not useful from the viewpoint that it is not based on any objective function. Therefore, our proposed algorithms can be expected to be more useful in the field of rough clustering.
In objective-based clustering methods, the concept of the classification function is very important. The classification function gives us the belongingness of an unknown datum to each cluster. It is impossible to derive the classification functions of our algorithms analytically, hence we cannot show the functions explicitly. However, as we have seen, the value of the belongingness can be obtained numerically. In future work, we will develop this discussion.
References
P. Lingras and G. Peters. Rough clustering. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, Vol. 1, No. 1, pp. 64–72, 2011.
P. Lingras and C. West. Interval set clustering of web users with rough \(k\)-means. Journal of Intelligent Information Systems, Vol. 23, No. 1, pp. 5–16, 2004.
R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, New York, second edition, 1973.
J. B. MacQueen. Some methods for classification and analysis of multivariate observations. Proceedings of 5-th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, Vol. 1, pp. 281–297, 1967.
J. C. Dunn. A fuzzy relative of the isodata process and its use in detecting compact well-separated clusters. Journal of Cybernetics, Vol. 3, pp. 32–57, 1973.
J. C. Bezdek. Pattern recognition with fuzzy objective function algorithms. Plenum Press, New York, 1981.
Z. Pawlak. Rough sets. International Journal of Computer and Information Sciences, Vol. 11, No. 5, pp. 341–356, 1982.
M. Inuiguchi. Generalizations of rough sets: From crisp to fuzzy cases. Proceedings of Rough Sets and Current Trends in Computing, pp. 26–37, 2004.
Z. Pawlak. Rough classification. International Journal of Man-Machine Studies, Vol. 20, pp. 469–483, 1984.
S. Hirano and S. Tsumoto. An indiscernibility-based clustering method with iterative refinement of equivalence relations. Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 7, No. 2, pp. 169–177, 2003.
S. Mitra, H. Banka, and W. Pedrycz. Rough-fuzzy collaborative clustering. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 36, No. 5, pp. 795–805, 2006.
P. Maji and S. K. Pal. Rough set based generalized fuzzy \(c\)-means algorithm and quantitative indices. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 37, No. 6, pp. 1529–1540, 2007.
G. Peters. Rough clustering and regression analysis. Proceedings of RSKT 2007, LNAI, Vol. 4481, pp. 292–299, 2007.
S. Mitra and B. Barman. Rough-fuzzy clustering: An application to medical imagery. Rough Sets and Knowledge Technology, LNCS, Vol. 5009, pp. 300–307, 2008.
M. Roubens. Pattern classification problems and fuzzy sets. Fuzzy Sets and Systems, Vol. 1, pp. 239–253, 1978.
© 2017 Springer International Publishing AG
Endo, Y., Kinoshita, N. (2017). Various Types of Objective-Based Rough Clustering. In: Torra, V., Dahlbom, A., Narukawa, Y. (eds) Fuzzy Sets, Rough Sets, Multisets and Clustering. Studies in Computational Intelligence, vol 671. Springer, Cham. https://doi.org/10.1007/978-3-319-47557-8_5