
1 Introduction

Clustering aims at partitioning a data set of n points into k clusters. An external index measures how similar a clustering solution is to a given ground truth (if available), or how similar two clustering solutions are to each other. Such measures are needed in clustering ensembles, in measuring stability, and in evaluating the performance of clustering algorithms.

In clustering algorithms, one of the main challenges is to solve the global allocation of the clusters instead of merely tuning the partition borders locally. Despite this, most external cluster validity indexes calculate only point-level differences without any direct information about how similar the cluster-level structures are.

Rand index (RI) [1] and Adjusted Rand index (ARI) [2] count the number of pairs of data points that are partitioned consistently in both clustering solutions (A and B): if a pair of points is allocated to the same cluster in A, it should be allocated to the same cluster also in B. This provides an estimate of point-level similarity but gives little information about the similarity at the cluster level.

More sophisticated methods operate at the cluster level. Mutual information (MI) [3] and Normalized mutual information (NMI) [4] measure the amount of information (conditional entropy) that can be obtained about a cluster in A from the clusters in B. Set-matching based measures include Normalized van Dongen (NVD) [5] and Criterion-H (CH) [6]. They match the clusters between A and B and measure the amount of overlap between the matched clusters, see Fig. 1. However, all of these still measure point-level differences. What is missing is a simple structural cluster-level measure.

Fig. 1. Principle of set-matching based external validity indexes. The values are the numbers of overlapping points between the clusters. Blue indicates the clusters that would be selected for matching. Sample index values for this example are NMI = 0.42, NVD = 0.20, CH = 0.20, CI = 0. (Color figure online)

Figure 2 demonstrates the situation with three clustering results: k-means (KM) [7], random swap (RS) [8] and agglomerative clustering (AC) [9]. The clustering structures of RS and AC are roughly the same, and their differences come mostly from the minor inaccuracies of the centroids in the agglomerative clustering. The result of k-means has a more significant, structural difference: one centroid is missing at the top, and there are too many centroids at the bottom.

Fig. 2. Comparing three clustering results: Random Swap (RS), Agglomerative clustering (AC) and K-means (KM) with eight measures.

Some indexes (ARI, NVD, CH) indicate that RS-vs-AC in Fig. 2 are more similar (ARI = 0.91; NVD = 0.05; CH = 0.05) than RS-vs-KM (ARI = 0.88; NVD = 0.07; CH = 0.10) or AC-vs-KM (ARI = 0.82; NVD = 0.10; CH = 0.14), but the numbers do not reveal that the global structures of RS and AC are identical.

In a recent work [10], we introduced a cluster-level index called the centroid index (CI) to cope with this problem by measuring the number of clusters that are allocated differently. For each centroid in A, the method finds its nearest centroid in B. It then calculates the indegree value for each centroid in B: how many times it was selected as the nearest. An orphan centroid (indegree = 0) indicates a cluster that is located differently in the clustering structure. The index value is the count of these orphans, so the CI-value indicates the number of cluster-level differences. CI = 0 indicates that the clustering structures are identical, CI = 1 that one cluster is mismatched, and so on.

The measure is somewhat rough as it ignores point-level differences. A simple point-level extension called centroid similarity index (CSI) was therefore also considered. However, the main idea of measuring cluster-level differences is best captured by the raw CI-value, or by its value relative to the number of clusters, CI/k. In Fig. 2, the CI-value clearly tells that RS and AC have a similar clustering structure (CI = 0), and that k-means has one difference (CI = 1).

A limitation of CI is that the centroids must be available. Other clustering models such as k-medoids [11] and k-modes [12] can still be handled by finding the nearest medoid/mode using distances in the feature space. However, for more complex clustering models such as the Gaussian mixture model (GMM), density-based or arbitrary-shape clustering, it is not as straightforward.

In this paper, we show how to apply the centroid index using partitions instead of the centroids. For every cluster in A, we first find its most similar cluster in B by calculating the amount of overlap with the Jaccard coefficient, which is the number of shared points divided by the total number of distinct data points in the two clusters:

$$ J = \frac{\left| A_{i} \cap B_{j} \right|}{\left| A_{i} \cup B_{j} \right|} $$
(1)

where $A_i$ and $B_j$ are the compared clusters in A and B.

However, we do not sum up the overlaps; instead, we further analyze the nearest neighbor mapping. The cluster-to-cluster similarity is now determined at the point level, but the overall value is still measured at the cluster level. The measure is calculated as the number of orphan clusters (indegree = 0), as in the original definition of the centroid index. Thus, the method generalizes from the centroid model to other clustering models, independent of how a cluster is represented: by a centroid, medoid, mode, or no prototype at all.

For example, in Fig. 1 the topmost clusters are mapped to each other, and the bottommost clusters to each other. This results in a mapping where the indegree of every cluster is 1, and the CI-value becomes 0. Therefore, the two solutions have the same clustering structure and differ only at the point level.
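
As a sketch of this procedure (the function name ci_partition and the representation of clusterings as integer label arrays are our own choices, not from the paper), the Jaccard-based matching and the orphan counting can be written as:

```python
import numpy as np

def ci_partition(labels_a, labels_b):
    """Partition-based centroid index: map every cluster of A to its most
    similar cluster of B by the Jaccard overlap (1), count the orphan
    clusters of B (indegree = 0), repeat in the other direction, and
    return the maximum of the two counts."""
    def orphan_count(src, dst):
        src_ids, dst_ids = np.unique(src), np.unique(dst)
        indegree = np.zeros(len(dst_ids), dtype=int)
        for i in src_ids:
            in_i = (src == i)
            jaccard = [(in_i & (dst == j)).sum() / (in_i | (dst == j)).sum()
                       for j in dst_ids]
            indegree[int(np.argmax(jaccard))] += 1   # most similar cluster gets the mapping
        return int((indegree == 0).sum())            # orphans = clusters never selected
    return max(orphan_count(labels_a, labels_b),
               orphan_count(labels_b, labels_a))

# Toy example: same two-cluster structure, one point placed differently -> CI = 0
a = np.array([0, 0, 0, 1, 1, 1])
b = np.array([1, 1, 0, 0, 0, 0])
print(ci_partition(a, b))   # prints 0
```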

2 External Cluster Validity

Existing cluster validity indexes can be divided into three categories: pair-counting, information theoretic and set-matching based measures:

  • Pair-counting measures

    • RI = Rand index [1]

    • ARI = Adjusted Rand index [2]

  • Information theoretic measures

    • MI = Mutual information [3]

    • NMI = Normalized Mutual information [4]

  • Set-matching based measures

    • NVD = Normalized van Dongen [5]

    • CH = Criterion H [6]

    • CI = Centroid Index [10]

    • CSI = Centroid Similarity Index [10]

We next briefly recall the idea of set-matching based methods [13]. The clusters are first either paired (NVD) or matched (CH and CI). In pairing, the best pairing of the clusters in A and B is found by maximizing the sum of the similarities of the paired clusters. The Hungarian algorithm [13] or a greedy algorithm [6, 14] has been used to solve this.

In matching, the nearest neighbour is searched for each cluster. This does not always result in a bijective mapping; several clusters in A can be mapped to the same cluster in B (or vice versa), see Fig. 3. The mapping is not symmetric, and it is usually done in both directions: A→B and B→A.

Fig. 3. Matching of clusters using pointwise similarities.

The similarities of the matched/paired clusters are then calculated by summing up the total overlap. In [13], the values are first normalized by the maximum cluster size. In NVD, CH and CSI, the normalization is performed after the summation by dividing the total number of shared points by the total number of points:

$$ S = \frac{1}{N} \sum\limits_{i = 1}^{k} \left| A_{i} \cap NN\left( A_{i} \right) \right| $$
(2)

where $NN(A_i)$ is the cluster in B that $A_i$ is matched/paired with. The centroid index (CI) instead finds the nearest centroid without any use of the partitions:

$$ NN\left( A_{i} \right) = \mathop{\arg\min}\limits_{1 \le j \le k} \left\| c\left[ A_{i} \right] - c\left[ B_{j} \right] \right\| $$
(3)

where $c[A_i]$ and $c[B_j]$ are the centroids of A and B. The other difference is that CI does not use the point-level measure (2). Instead, it calculates the number of mappings (indegree) made to each cluster $B_j$, and then counts the clusters in B that have not been mapped to at all (indegree = 0). These are called orphans:

$$ CI_{1}\left( A \to B \right) = \sum\limits_{j = 1}^{k} Orphan\left( B_{j} \right) $$
(4)

where $Orphan(B_j)$ has the value 1 if no cluster in A is mapped to it:

$$ Orphan\left( B_{j} \right) = \left\{ \begin{array}{ll} 1, & InDegree\left( B_{j} \right) = 0 \\ 0, & InDegree\left( B_{j} \right) > 0 \\ \end{array} \right. $$
(5)

In order to have a symmetric index, we perform the mapping in both directions: A→B and B→A. The CI-value is then defined as the maximum of the two:

$$ CI\left( A,B \right) = \max \left\{ CI_{1}\left( A \to B \right), CI_{1}\left( B \to A \right) \right\} $$
(6)

To sum up, the index is easy to calculate, and the result has a clear interpretation: how many clusters are allocated differently. An example is shown in Fig. 4.

Fig. 4. Example of the centroid index (CI). Only the mapping RED→PURPLE is shown. Some purple circles are underneath the red ones and thus not visible. (Color figure online)
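
For reference, Eqs. (3)–(6) translate almost directly into code. The following sketch is our own illustration (hypothetical function name; the two centroid sets are assumed to be given as k×d NumPy arrays):

```python
import numpy as np

def ci_centroids(centroids_a, centroids_b):
    """Centroid-based CI: nearest-centroid mapping (3), orphan count (4)-(5),
    symmetrized by taking the maximum of both directions (6)."""
    def orphan_count(src, dst):
        indegree = np.zeros(len(dst), dtype=int)
        for c in src:
            nearest = np.argmin(((dst - c) ** 2).sum(axis=1))  # squared Euclidean distance
            indegree[nearest] += 1
        return int((indegree == 0).sum())
    return max(orphan_count(centroids_a, centroids_b),
               orphan_count(centroids_b, centroids_a))
```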

3 Centroid Index Using Different Clustering Models

The main limitation of the centroid index is that it requires centroids, which may not exist in other clustering models such as k-medoids [11], k-modes [12], Gaussian mixture model (GMM), density-based and arbitrary-shape clustering. To apply the index to other models, we consider the following alternative approaches:

  • Prototype similarity

  • Partition similarity

  • Model similarity

3.1 Prototype vs. Partition Similarity

K-medoids and k-modes clustering also operate in a feature space (usually Euclidean), so generalization to them is trivial using the prototype similarity approach: we simply find the nearest prototype in the other solution using the distance in the feature space. It does not matter whether the prototype is a centroid, medoid or mode.

The second approach uses the partitions of the two clustering solutions. Each cluster is matched by finding the most similar cluster in the other solution using (1). This applies to any partition-based clustering and can be calculated from the contingency table in O(N·k²) time [2]. The contingency table tells how many shared points two clusters $A_i$ and $B_j$ have. See Table 1 for an example.

Table 1. Contingency table for the clusterings presented in Fig. 3. For instance, cluster $A_1$ shares 5 points with cluster $B_1$, 1 point with cluster $B_2$, and 2 points with cluster $B_3$.
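
The table-based computation can be sketched as follows (hypothetical function names; labels assumed to be integers 0..k−1): the contingency table is built in a single pass over the N points, and the Jaccard overlaps needed for the matching then follow from its row and column sums.

```python
import numpy as np

def contingency_table(labels_a, labels_b):
    """n_ij = number of points shared by cluster i of A and cluster j of B."""
    table = np.zeros((labels_a.max() + 1, labels_b.max() + 1), dtype=int)
    for i, j in zip(labels_a, labels_b):
        table[i, j] += 1                         # one pass over the N points
    return table

def jaccard_from_table(table):
    """Jaccard overlap for every cluster pair: |Ai ∩ Bj| / |Ai ∪ Bj|."""
    sizes_a = table.sum(axis=1, keepdims=True)   # |Ai|
    sizes_b = table.sum(axis=0, keepdims=True)   # |Bj|
    return table / (sizes_a + sizes_b - table)   # union = |Ai| + |Bj| - |Ai ∩ Bj|
```

Matching each cluster of A to the argmax of its row (and each cluster of B to the argmax of its column) then yields the same indegree and orphan counts as the direct computation sketched in Sect. 1.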

This partition similarity approach also applies to centroid-based clustering, but it may be slower to calculate. Finding the most similar prototype takes only O(dk²), where d is the number of dimensions. This is expected to be faster than the O(N) + O(k²) of the partition similarity, unless the dimensionality or the number of clusters is very high. For the S1–S4 data (http://cs.uef.fi/sipu/datasets), the estimated numbers are 450 distance calculations for the prototype similarity and 4352 counter additions for the partition similarity. However, the observed values in Table 2 show that calculating the squared distances takes so much longer that the speed benefit is practically lost. Thus, the partition-based variant is virtually as fast already when d = 2.

Table 2. Processing times for CI using prototype and partition similarity. The third row shows how many times they provided different CI-values. Data sets S1-S4 are from [15].

It is also possible that the two approaches provide different mappings. Even then, the resulting CI-value is expected to be mostly the same. Table 2 also reports the number of times the two approaches gave a different CI-value.

The third approach is to derive the similarity of the clusters directly from the probability density function of the model. Next we study this for the Gaussian mixture model.

3.2 Model Similarity

The Gaussian mixture model (GMM) represents every cluster by its centroid and covariance matrix. This increases the size of the model from O(1) to O(d²) per cluster. However, there is often not enough data to estimate the covariances reliably. A simplified variant therefore considers only the diagonal of the covariance matrix, thus reducing the model size to O(d). We use this simplified variant here.

The Expectation-maximization (EM) algorithm [16] optimizes a GMM analogously to k-means: it iterates the Expectation and Maximization steps in turn to optimize the log-likelihood. It also suffers from the same problem as k-means: it gets stuck in a local optimum. Better variants include split-and-merge [17], a genetic algorithm [18] and random swap [19].

For comparing two clusters (mixture components), we can use any of the three approaches. The prototype similarity approach ignores the covariance and simply finds the nearest centroid in the other solution. The partition similarity approach performs a hard partition by assigning each point to the cluster with maximum likelihood, after which the similarity can be calculated using the contingency table.
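
A minimal usage sketch of this hard-assignment step follows (the synthetic placeholder data, the use of scikit-learn's GaussianMixture, and the reuse of the ci_partition sketch from Sect. 1 are our own assumptions, not part of the paper's setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # placeholder data; any (n, d) matrix would do

# Two diagonal-covariance mixtures, e.g. fitted with different initializations
gmm_a = GaussianMixture(n_components=5, covariance_type='diag', random_state=1).fit(X)
gmm_b = GaussianMixture(n_components=5, covariance_type='diag', random_state=2).fit(X)

# Maximum-likelihood (hard) assignment gives the partitions ...
labels_a, labels_b = gmm_a.predict(X), gmm_b.predict(X)

# ... which can then be compared exactly as before, e.g. with ci_partition
print(ci_partition(labels_a, labels_b))
```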

For the model similarity, we use here the Bhattacharyya coefficient. It measures the similarity between two probability distributions p and q, and is calculated as follows:

$$ S_{BC} = \sum\limits_{i} \sqrt{p_{i} \cdot q_{i}} $$
(7)

For two multivariate normal distributions, the corresponding Bhattacharyya distance ($-\ln S_{BC}$) can be written as:

$$ D_{B} = \frac{1}{8}\left( c\left[ A_{i} \right] - c\left[ B_{j} \right] \right)^{T} \varSigma^{-1} \left( c\left[ A_{i} \right] - c\left[ B_{j} \right] \right) + \frac{1}{2}\ln \left( \frac{\left| \varSigma \right|}{\sqrt{\left| \varSigma_{1} \right|\left| \varSigma_{2} \right|}} \right) $$
(8)

where $c[A_i]$ and $c[B_j]$ are the means (centroids), $\varSigma_1$ and $\varSigma_2$ are the covariance matrices of the two clusters $A_i$ and $B_j$, and $\varSigma$ is the average of $\varSigma_1$ and $\varSigma_2$. The first term in (8) corresponds to the Mahalanobis distance, to which the Bhattacharyya distance reduces (up to the constant 1/8) when the covariance matrices of the two distributions are equal, since the second term then vanishes.
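
A sketch of (8) for the diagonal-covariance case used here (hypothetical function name; means and variances passed as d-dimensional vectors). The cluster in B minimizing this distance is taken as the match:

```python
import numpy as np

def bhattacharyya_distance(mean1, var1, mean2, var2):
    """Eq. (8) for two Gaussians with diagonal covariances var1 and var2.
    First term: Mahalanobis-type distance between the means.
    Second term: penalty for differing covariances."""
    var = (var1 + var2) / 2.0                              # average covariance (diagonal)
    term1 = 0.125 * np.sum((mean1 - mean2) ** 2 / var)
    term2 = 0.5 * np.sum(np.log(var)) \
            - 0.25 * (np.sum(np.log(var1)) + np.sum(np.log(var2)))
    return term1 + term2
```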

Figure 5 shows an example of two GMM solutions obtained by the SMEM [17] and RSEM [19] algorithms. We compare the results by calculating the various indexes from the resulting partitions. The CI-value was calculated with all three approaches; all resulted in the same mapping and gave exactly the same value, CI = 2.

Fig. 5. Centroid index for the Gaussian mixture model. Only the mapping SMEM→RSEM is shown.

3.3 Arbitrary Partitions

Arbitrary-shape clustering differs from model-based clustering in that there is no model or prototype for the clusters. This kind of clustering problem is often solved based on connectivity. For example, the single link algorithm [20] results in a minimum spanning forest where each cluster is represented by the minimum spanning tree of its points.

To compare such clustering results by CI, we use the partition similarity because it is independent of the chosen clustering model. All that is required is access to the partitions. However, in online clustering, where huge amounts of data are processed, the original data points might not be stored but are deleted immediately once their contribution to the model has been calculated. In this case, the partition-based similarity cannot be applied, but in most offline applications we do have access to the partitions.
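
As a usage sketch with placeholder data (the actual experiments use the data sets of Sect. 4), any label vectors, e.g. from single link, can be compared with the ci_partition sketch from Sect. 1 without any prototypes:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))              # placeholder for, e.g., the Compound data
ground_truth = rng.integers(0, 6, 400)     # placeholder ground-truth labels

# Single link clustering; no centroids exist, but the label vectors suffice for CI
single_link = AgglomerativeClustering(n_clusters=6, linkage='single').fit_predict(X)
print(ci_partition(single_link, ground_truth))
```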

Examples of data with arbitrary-shape clusters are shown in Fig. 6, clustered by k-means (left) and single link (right). K-means misses the two smaller clusters and divides the bigger ones in the middle and at the top. Single link makes three mistakes in total, by merging three real clusters and by creating three false clusters of size one.

Fig. 6. K-means result for Aggregate (left) and Single Link result for Compound (right) when compared to the ground truth (GT). Mapping of the max value is shown (KM→GT; GT→SL).

4 Experiments

Here we provide a numerical comparison of the centroid index (CI) against selected existing indexes. We use the following data sets:

  • S1–S4 [15]: 5000 points in 15 clusters.

  • Unbalanced [13]: 6500 points in 8 clusters

  • Birch2 [23]: 100,000 points in 100 clusters

  • Aggregate [24]: 788 points in 7 clusters

  • Compound [25]: 399 points in 6 clusters

We apply the following algorithms: K-means (KM), K-means++ (KM++) [26], Random Swap (RS) [8] and Genetic Algorithm (GA) [27] for the data with spherical clusters; Single Link (SL) [20], DBSCAN [21] and Split-and-Merge (SAM) [22] for the data with arbitrary-shape clusters. K-means is applied to both.

Sample clustering results with the corresponding validity index values are given in Table 3. Unbalanced is rather easy to cluster by a good algorithm, but k-means fails because random initialization tends to select all centroids from the bigger clusters on the left, and only one centroid moves to cover the five small clusters, see Fig. 7. This leaves four of the small clusters uncovered, which results in CI = 4. Most other indexes react to this, but their exact values tell very little about how severe the error is, whereas the CI-value tells that half (4/8) of the clusters are wrongly allocated.

Table 3. Sample clustering results with validity values. CI = 0 indicates correct structure.
Fig. 7. Datasets Unbalance (above) and Birch2 (below) clustered by k-means.

The results for the data with arbitrary-shaped clusters are similar. DBSCAN and SAM work well for Aggregate, providing the perfect clustering structure (CI = 0), although DBSCAN leaves out a few points as outliers. Compound is more challenging, and all methods make 2 or 3 errors in the clustering structure, usually by merging the leftmost clusters and creating too many on the right.

Finally, we study how the indexes react when randomness is increasingly added to artificially created partitions (for details, see [13]). Figure 8 shows that the centroid index does not react at all to these point-level changes as long as most points stay in their original cluster. The values of the set-based measures (NVD, CH, Purity) decrease linearly, which shows that they are the most appropriate for measuring point-level changes.

Fig. 8. Effect of increasing randomness in the partitions on the clustering result.

5 Conclusions

The Centroid Index (CI) is the only validity index that provides a cluster-level measure. It tells exactly how many clusters are allocated differently, which is more useful information than counting point-level differences. In this paper, we applied it to other clustering models such as the Gaussian mixture model and to data with arbitrary-shaped clusters. Its main advantage is that the significance of the index value can be trivially concluded: a value CI > 0 indicates that there is a significant difference in the clustering structure.