Abstract
How to economically cluster large-scale multi-view images is a long-standing problem in computer vision. To tackle this challenge, we introduce a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression. We unify binary representation learning and efficient binary cluster structure learning into a joint framework. In particular, common binary representations are learned by exploiting both sharable and individual information across multiple views to capture their underlying correlations. Meanwhile, cluster assignment with robust binary centroids is performed via effective discrete optimization under an \(\ell _{21}\)-norm constraint. By this means, heavy continuous-valued Euclidean distance computations can be replaced by efficient binary XOR operations during the clustering procedure. To the best of our knowledge, HSIC is the first binary clustering work specifically designed for scalable multi-view image clustering. Extensive experimental results on four large-scale image datasets show that HSIC consistently outperforms the state-of-the-art approaches, whilst significantly reducing computational time and memory footprint.
Z. Zhang, L. Liu and J. Qin—Equal contributions.
1 Introduction
Image clustering is a commonly used unsupervised analytical technique in practical computer vision applications [17]. The aim of image clustering is to discover the natural and interpretable structure of image representations, so as to group images that are similar to each other into the same cluster. Depending on the number of sources from which images are collected or the number of feature types by which images are described, existing clustering methods can be divided into single-view image clustering (SVIC) [1, 16, 32, 36] and multi-view (see footnote 1) image clustering (MVIC) [3, 4, 22, 47, 48]. Recently, MVIC [3, 48, 51] has been attracting increasing attention due to the flexibility of extracting multiple heterogeneous features from a single image. Compared to SVIC, MVIC has access to more characteristics and structural information of the data, and the features from diverse views can potentially complement each other and yield better clustering performance.
Existing MVIC methods can be roughly divided into three groups: multi-view spectral clustering [19, 30, 31], multi-view matrix factorization [4, 22, 37], and multi-view subspace clustering [13, 45, 49]. Multi-view spectral clustering [47] constructs multiple similarity graphs to obtain a common or similar eigenvector matrix over all views and then generates consensus data partitions; it hinges crucially on single-view spectral clustering [29]. Owing to the straightforward interpretability of matrix factorization [20], multi-view matrix factorization methods [4, 22] integrate information from multiple views towards a compatible common consensus, or decompose the heterogeneous features into specified centroid and cluster indicator matrices. Different from the above strategies, multi-view subspace clustering [13] employs the complementary properties across multiple views to uncover the common latent subspace and quantify the genuine similarities. Some other kernel-based MVIC methods [10, 42] exploit a linear or a non-linear kernel on each view. Note that SVIC methods (e.g., k-means [16] and spectral clustering [29]) can also be leveraged to deal with the multi-view clustering problem; a common practice is to perform clustering on either a single-view feature or a simple concatenation of the multiple features [47, 48].
Although SVIC and MVIC methods have achieved much progress on small- and middle-scale data, both of them become intractable (because of unaffordable computation and memory overhead) when dealing with large-scale data of high dimensionality, which is a typical case in the era of ‘big data’. As pointed out in [15, 41], we argue that real-valued features are the essential bottleneck restricting the scalability of existing clustering methods. To address this issue, inspired by the recent advances in compact binary coding (a.k.a. hashing) [5, 23, 24, 27, 34, 39, 40, 43], we aim to develop a feasible binary clustering technique for large-scale MVIC. Specifically, we transform the original real-valued Euclidean space into a low-dimensional binary Hamming space, based on which an efficient clustering solution can then be devised. In this way, time-consuming Euclidean distance computations (typically of \(\mathcal {O}(Nd)\) complexity, where N and d respectively denote the data size and dimension) on real-valued data can be replaced by extremely fast XOR operations (of \(\mathcal {O}(1)\) complexity) on compact binary codes. Note that the proposed method is also potentially promising in practical use cases where computation and memory resources are limited (e.g., on wearable or mobile devices).
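As a minimal illustration of this point (our own sketch, not code from the paper; variable names are hypothetical), the following compares a d-dimensional Euclidean distance with a K-bit Hamming distance computed by XOR and popcount on packed code words:

```python
import numpy as np

def euclidean_distance(x, y):
    # O(d) floating-point operations per pair of real-valued vectors.
    return np.sqrt(np.sum((x - y) ** 2))

def hamming_distance(b1, b2):
    # Codes are packed into 64-bit words (K bits -> K/64 words), so one
    # distance costs only a handful of XOR + popcount machine operations.
    return sum(bin(w1 ^ w2).count('1') for w1, w2 in zip(b1, b2))

# Toy example: a 1024-d real feature vs. a 128-bit binary code (two words).
x, y = np.random.randn(1024), np.random.randn(1024)
b1, b2 = [0x0F0F0F0F0F0F0F0F, 0x1234], [0xFFFF0000FFFF0000, 0x1230]
print(euclidean_distance(x, y), hamming_distance(b1, b2))
```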
As shown in Fig. 1, we develop a Highly-economized Scalable Image Clustering (HSIC) framework for efficient large-scale MVIC. HSIC jointly learns effective common binary representations and robust discrete cluster structures. The former maximally preserve both sharable and view-specific/individual information across multiple views; the latter significantly promote the computational efficiency and robustness of clustering. The joint learning strategy facilitates the collaboration between the two objectives and is thus superior to learning each of them separately. An efficient alternating optimization algorithm is developed to address the joint discrete optimization problem. The main contributions of this work include:
(1) To the best of our knowledge, HSIC is the first work capable of large-scale MVIC, where common binary representations and robust binary cluster structures are obtained in a unified learning framework.

(2) HSIC captures both sharable and view-specific information from multiple views to fully exploit the complementarity and individuality of heterogeneous image features. The sparsity-induced \(\ell _{21}\)-norm is imposed on the clustering model to further alleviate its sensitivity to outliers and noise.

(3) Extensive experimental results on four image datasets clearly show that HSIC can reduce the memory footprint and computational time by up to 951 and 69.35 times respectively over the classical k-means algorithm, while consistently outperforming the state-of-the-art approaches.
Notably, two works [15, 41] in the literature are most relevant to ours: [15] introduced a two-step binary k-means approach, in which clustering is performed on the binary codes obtained by Iterative Quantization (ITQ) [14], and [41] integrated binary structural SVM with k-means. Our HSIC fundamentally differs from them in the following aspects: (1) [15] and [41] are SVIC methods, while HSIC is specifically designed for MVIC; (2) [15] divides the clustering task into two unconnected procedures, which severs the important tie between binary coding and cluster structure learning, while the binary codes learned by [41] lack adequate representative capability and are thus too weak to achieve satisfactory results. More importantly, neither method can make full use of the complementary properties of multiple views for scalable MVIC, as also shown in [50].
In the next section, we will introduce the detailed framework of our HSIC and then elaborate on the alternating optimization algorithm. The analysis in terms of computational complexity and memory load will also be presented.
2 Highly-Economized Scalable Image Clustering
Suppose we have a set of multi-view image features \(\mathcal X\) = \(\{\varvec{X}^1,\cdots ,\varvec{X}^m\}\) from m views, where \(\varvec{X}^v = [\varvec{x}_1^v, \cdots , \varvec{x}_N^v] \in \mathfrak {R}^{d_v \times N}\) is the accumulated feature matrix from the v-th view. \(d_v\) and N denote the dimensionality and the number of data points in \(\varvec{X}^v\), respectively. \(\varvec{x}_i^v \in \mathfrak {R}^{d_v \times 1}\) is the i-th feature vector from the v-th view. The main objective of unsupervised MVIC is to partition \(\mathcal X\) into c groups, where c is the number of clusters. In this work, to address the large-scale MVIC problem, HSIC performs binary clustering in a much lower-dimensional Hamming space. In particular, we perform multi-view compression (i.e., project multi-view features onto a common Hamming space) by learning a compatible common binary representation via the complementary characteristics of multiple views. Meanwhile, robust binary cluster structures are formulated in the learned Hamming space for efficient clustering.
As a preprocessing step, we first normalize the features from each view to be zero-centered. Inspired by [26, 40], each feature vector is encoded by a simple nonlinear RBF kernel mapping, i.e., \(\psi (\varvec{x}_i^v) = [\exp (-\Vert \varvec{x}_i^v - \varvec{a}_1^v\Vert ^2/\gamma ),\cdots , \exp (-\Vert \varvec{x}_i^v - \varvec{a}_l^v\Vert ^2/\gamma )]^{\top }\), where \(\gamma \) is the pre-defined kernel width and \(\psi (\varvec{x}_i^v)\in \mathfrak {R}^{l\times 1}\) denotes an l-dimensional nonlinear embedding of the i-th feature from the v-th view. Similar to [25, 26, 40], \(\{\varvec{a}_i^v\}_{i=1}^l\) are l anchor points randomly selected from \(\varvec{X}^v\) (\(l=1000\) is used for each view in this work). In the following, we introduce how to learn the common binary representation and the robust binary cluster structure, respectively, and finally arrive at a joint learning objective.
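For illustration, a minimal NumPy sketch of this anchor-based RBF embedding is given below (our own code, not from the paper; the function name and the kernel width value are hypothetical):

```python
import numpy as np

def rbf_anchor_embedding(X, anchors, gamma):
    """Anchor-based RBF embedding psi(X).

    X:       (d, N) zero-centered features of one view.
    anchors: (d, l) anchor points randomly sampled from the same view.
    returns: (l, N) nonlinear embeddings, one column per sample.
    """
    # Squared Euclidean distance between every anchor and every sample.
    sq_dist = (np.sum(anchors ** 2, axis=0)[:, None]
               + np.sum(X ** 2, axis=0)[None, :]
               - 2.0 * anchors.T @ X)
    return np.exp(-sq_dist / gamma)

# Usage sketch: one view with d = 1024 dims, N = 5000 samples, l = 1000 anchors.
X = np.random.randn(1024, 5000)
anchors = X[:, np.random.choice(X.shape[1], 1000, replace=False)]
psi_X = rbf_anchor_embedding(X, anchors, gamma=1.0)  # gamma value is illustrative
```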
(1) Common Binary Representation Learning. We consider a family of K hashing functions to be learned in HSIC, which quantize each \(\psi (\varvec{x}_i^v)\) into a binary representation \(\varvec{b}_i^v = [b_{i1}^v,\cdots ,b_{iK}^v]^T \in \{-1,1\}^{K\times 1}\). To eliminate the semantic gaps between different views, HSIC generates the common binary representation by combining multi-view features. Specifically, HSIC simultaneously projects features from multiple views onto a common Hamming space, i.e., \(\varvec{b}_i = sgn\big ((\varvec{P}^v)^{\top } \psi (\varvec{x}_i^v)\big )\), where \(\varvec{b}_i\) is the common binary code of the i-th features from different views (i.e., \(\varvec{x}_i^v\), \(\forall v=1,...,m\)), \(sgn(\cdot )\) is an element-wise sign function, \( \varvec{P}^v = [\varvec{p}_1^v, \cdots , \varvec{p}_K^v] \in \mathfrak {R}^{l\times K}\) is the mapping matrix for the v-th view and \(\varvec{p}_i^v\) is the projection vector for the i-th hashing function. As such, we construct the learning function by minimizing the following quantization loss:
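A plausible form of this quantization loss, written out from the definitions above (a sketch only; the exact formulation of Eq. (1) in the paper may differ), is
\[ \min _{\varvec{B},\{\varvec{P}^v\}}\; \sum _{v=1}^{m} \big \Vert \varvec{B} - (\varvec{P}^v)^{\top }\psi (\varvec{X}^v)\big \Vert _F^2, \quad \text {s.t.}\ \varvec{B}\in \{-1,1\}^{K\times N}. \]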
Since different views describe the same subject from different perspectives, the projection \(\{\varvec{P}^v\}_{v=1}^m\) should capture the shared information that maximizes the similarities of multiple views, as well as the view-specific/individual information that distinguishes individual characteristics between different views. To this end, we decompose each projection into the combination of sharable and individual projections, i.e., \(\varvec{P}^v = [\varvec{P}_S, \varvec{P}_I^v]\). Specifically, \(\varvec{P}_S \in \mathfrak {R}^{l\times K_S}\) is the shared projection across multiple views, while \(\varvec{P}_I^v \in \mathfrak {R}^{l\times K_I}\) is the individual projection for the v-th view, where \(K = K_S+K_I\). Therefore, HSIC collectively learns the common binary representation from multiple views using
where \(\varvec{B} = [\varvec{b}_1,\cdots ,\varvec{b}_N]\), \(\varvec{\alpha }=[\alpha ^1,\cdots ,\alpha ^m]\in \mathfrak {R}^m\) weighs the importance of different views, \(r>1\) is a constant managing the weight distributions, and \(\lambda _1\) is a regularization parameter. The second term is a regularizer to control the parameter scales.
Moreover, from the information-theoretic point of view, the information provided by each bit of the binary codes needs to be maximized [2]. Based on this point and motivated by [14, 44], we adopt an additional regularizer for the binary codes \(\varvec{B}\) using maximum entropy principle, i.e., \(\max ~var[\varvec{B}] = var[sgn\big ((\varvec{P}^v)^{\top } \psi (\varvec{x}_i^v)\big )]\). This additional regularization on \(\varvec{B}\) can ensure the balanced partition and reduce the redundancy of the binary codes. Here we replace the sign function by its signed magnitude, and formulate the relaxed regularization as follows
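A plausible relaxed form, consistent with the term \(g(\varvec{P}^v)\) reused in the \(\varvec{\alpha }\)-step of Sect. 2.1 and with the factor \((1-\frac{\lambda _2}{N})\) appearing in the closed-form solutions there (again a sketch rather than the paper's exact Eq. (3)), is
\[ \max _{\{\varvec{P}^v\}}\; \sum _{v=1}^{m} g(\varvec{P}^v), \qquad g(\varvec{P}^v) = \frac{1}{N}\big \Vert (\varvec{P}^v)^{\top }\psi (\varvec{X}^v)\big \Vert _F^2. \]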
Finally, we combine problems (2) and (3) together and reformulate the overall common binary representation learning problem as the following
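Based on the per-view loss \(h^v\) defined later in the \(\varvec{\alpha }\)-step, the combined problem plausibly takes the form (a sketch of Eq. (4))
\[ \min _{\varvec{B},\{\varvec{P}^v\},\varvec{\alpha }}\; \sum _{v=1}^{m} (\alpha ^v)^r \Big ( \big \Vert \varvec{B} - (\varvec{P}^v)^{\top }\psi (\varvec{X}^v)\big \Vert _F^2 + \lambda _1 \Vert \varvec{P}^v\Vert _F^2 - \lambda _2\, g(\varvec{P}^v) \Big ), \quad \text {s.t.}\ \varvec{B}\in \{-1,1\}^{K\times N},\ \sum _{v=1}^{m}\alpha ^v = 1,\ \alpha ^v \ge 0, \]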
where \(\lambda _2\) is a weighting parameter.
(2) Robust Binary Cluster Structure Learning. For binary clustering, HSIC directly factorizes the learned binary representation \(\varvec{B}\) into the binary clustering centroids \(\varvec{Q}\) and discrete clustering indicators \(\varvec{F}\) using
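A plausible form of this factorization objective (a sketch of Eq. (5); the exact constraint notation in the paper may differ) is
\[ \min _{\varvec{Q},\varvec{F}}\; \Vert \varvec{B} - \varvec{Q}\varvec{F}\Vert _{21}, \quad \text {s.t.}\ \varvec{Q}\varvec{1} = \varvec{0},\ \varvec{Q}\in \{-1,1\}^{K\times c},\ \varvec{F}\in \{0,1\}^{c\times N},\ \sum _{j} f_{ji} = 1\ \forall i, \]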
where \(\Vert \varvec{A}\Vert _{21} = \sum _i\Vert \varvec{a}^i\Vert _2\), and \(\varvec{a}^i\) is the i-th row of matrix \(\varvec{A}\). The first constraint of (5) ensures the balanced property on the clustering centroids, as with the binary codes. Note that the \(\ell _{21}\)-norm imposed on the loss function could also be replaced by the F-norm, i.e., \(\Vert \varvec{B} - \varvec{Q}\varvec{F}\Vert _F^2\). However, the F-norm based loss function amplifies the errors induced by noise and outliers. Therefore, to achieve more stable and robust clustering performance, we employ the sparsity-induced \(\ell _{21}\)-norm. It is also observed in [12] that the \(\ell _{21}\)-norm not only preserves the rotation invariance within each feature, but also controls the reconstruction error, which significantly mitigates the negative influence of representation outliers.

(3) Joint Objective Function. To preserve the semantic interconnection between the learned binary codes and the robust cluster structures, we incorporate the common binary representation learning and the discrete cluster structure constructing into a joint learning framework. In this way, the unified framework can interactively enhance the qualities of the learned binary representation and cluster structures. Hence, we have the following joint objective function:
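Combining the two sub-objectives above, the joint problem plausibly takes the form (a sketch of Eq. (6))
\[ \min _{\varvec{B},\{\varvec{P}^v\},\varvec{\alpha },\varvec{Q},\varvec{F}}\; \sum _{v=1}^{m} (\alpha ^v)^r \Big ( \big \Vert \varvec{B} - (\varvec{P}^v)^{\top }\psi (\varvec{X}^v)\big \Vert _F^2 + \lambda _1 \Vert \varvec{P}^v\Vert _F^2 - \lambda _2\, g(\varvec{P}^v) \Big ) + \lambda _3 \Vert \varvec{B} - \varvec{Q}\varvec{F}\Vert _{21}, \]
subject to the same constraints on \(\varvec{B}\), \(\varvec{\alpha }\), \(\varvec{Q}\) and \(\varvec{F}\) as above,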
where \(\lambda _1\), \(\lambda _2\) and \(\lambda _3\) are trade-off parameters to balance the effects of different terms. To optimize the difficult discrete programming problem, a newly-derived alternating optimization algorithm is developed as shown in the next section.
2.1 Optimization
The solution to problem (6) is non-trivial, since it involves a mixed binary integer program with three discrete constraints, which leads to an NP-hard problem. In the following, we introduce an alternating optimization algorithm that iteratively updates each variable while fixing the others, i.e., it updates \(\varvec{P}_s\rightarrow \varvec{P}_I^v \rightarrow \varvec{B} \rightarrow \varvec{Q} \rightarrow \varvec{F} \rightarrow \varvec{\alpha }\) in each iteration.
Due to the intractable \(\ell _{21}\)-norm loss, we first rewrite the last term in (6) as \(\lambda _3 tr\big (\varvec{U}^{\top } \varvec{D}\varvec{U}\big )\), where \(\varvec{U} = \varvec{B} - \varvec{Q}\varvec{F}\) and \(\varvec{D} \in \mathfrak {R}^{K\times K}\) is a diagonal matrix whose i-th diagonal element is defined as \(\varvec{d}_{ii} = 1/(2\Vert \varvec{u}^i\Vert _2)\), where \(\varvec{u}^i\) is the i-th row of \(\varvec{U}\).
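A minimal NumPy sketch of this reweighting (our own illustration; the small epsilon guarding against division by zero is our own safeguard and is not discussed in the paper) is:

```python
import numpy as np

def l21_reweighting(B, Q, F, eps=1e-8):
    """Diagonal reweighting matrix D used to handle the l21-norm term.

    B: (K, N) binary codes, Q: (K, c) binary centroids, F: (c, N) indicators.
    Returns the (K, K) diagonal matrix with d_ii = 1 / (2 * ||u^i||_2),
    where u^i is the i-th row of U = B - Q F.
    """
    U = B - Q @ F
    row_norms = np.linalg.norm(U, axis=1) + eps  # eps: our own numerical safeguard
    return np.diag(1.0 / (2.0 * row_norms))
```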
(1) \(\varvec{P}_s\)-Step: When fixing other variables, we update the sharable projection by
For notational convenience, we rewrite \(\psi (\varvec{X}^v)\psi ^{\top }(\varvec{X}^v)\) as \(\tilde{\varvec{X}}\). Taking the derivative of \(\mathcal L\) with respect to \(\varvec{P}_s\) and setting \(\frac{\partial \mathcal L}{\partial \varvec{P}_s} = 0\), we obtain the closed-form solution of \(\varvec{P}_s\), i.e.,
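Writing \(\varvec{B}_S\) for the rows of \(\varvec{B}\) produced by the shared projection (our notation), the normal equation suggests a solution of the form
\[ \varvec{P}_s = \Big (\varvec{A} + \lambda _1 \sum _{v=1}^{m}(\alpha ^v)^r\, \varvec{I}\Big )^{-1} \varvec{T}\, \varvec{B}_S^{\top }, \]
though this is only a sketch and the exact expression of Eq. (8) may differ,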
where \(\varvec{A} = (1-\frac{\lambda _2}{N})\sum _{v=1}^m (\alpha ^v)^r \tilde{\varvec{X}}\) and \(\varvec{T} = \sum _{v=1}^m (\alpha ^v)^r \psi (\varvec{X}^v)\).
(2) \(\varvec{P}_I^v\)-Step: Similarly, when fixing other parameters, the optimal solution of the v-th individual projection matrix can be determined by solving
and its closed-form solution can be obtained by \(\varvec{P}_I^v = \varvec{W}\psi (\varvec{X}^v)\varvec{B}^{\top }\), where \(\varvec{W} = \left( (1-\frac{\lambda _2}{N})\tilde{\varvec{X}}+\lambda _1 \varvec{I}\right) ^{-1}\) can be calculated beforehand.
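A direct NumPy transcription of this update (our own sketch; psi_Xv denotes \(\psi (\varvec{X}^v)\), and B here stands for the rows of the common binary codes paired with the individual projection, written simply as \(\varvec{B}\) in the text):

```python
import numpy as np

def update_individual_projection(psi_Xv, B, lambda1, lambda2):
    """Closed-form update of the individual projection P_I^v for one view.

    psi_Xv: (l, N) kernelized features of the v-th view.
    B:      (K_I, N) rows of the binary codes associated with P_I^v.
    Implements P_I^v = W psi(X^v) B^T with
    W = ((1 - lambda2 / N) * psi psi^T + lambda1 * I)^{-1}.
    """
    l, N = psi_Xv.shape
    X_tilde = psi_Xv @ psi_Xv.T                      # (l, l), precomputable
    W = np.linalg.inv((1.0 - lambda2 / N) * X_tilde + lambda1 * np.eye(l))
    return W @ psi_Xv @ B.T                          # (l, K_I)
```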
(3) \(\varvec{B}\)-Step: Problem (6) w.r.t. \(\varvec{B}\) can be rewritten as:
Since \(\varvec{B}\) only has ‘1’ and ‘-1’ entries and \(\varvec{D}\) is a diagonal matrix, both \(tr(\varvec{B} \varvec{B}^{\top })\) = \(tr(\varvec{B}^{\top }\varvec{B}) = KN\) and \(tr\left( \varvec{B}^{\top }\varvec{DB}\right) = N\cdot tr(\varvec{D})\) are constant w.r.t. \(\varvec{B}\). Based on this and with some further algebraic manipulation, (10) can be reformulated as

where ‘const’ denotes the constant terms. This problem has a closed-form solution:
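Keeping only the terms linear in \(\varvec{B}\), a plausible form of this solution (a sketch of Eq. (12); the exact weighting in the paper may differ) is
\[ \varvec{B} = sgn\Big (\sum _{v=1}^{m}(\alpha ^v)^r (\varvec{P}^v)^{\top }\psi (\varvec{X}^v) + \lambda _3\, \varvec{D}\varvec{Q}\varvec{F}\Big ). \]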
(4) \(\varvec{Q}\)-Step: First, we degenerate (6) into the following computationally feasible problem (by removing some irrelevant parameters and discarding the first constraint):
With a sufficiently large penalty parameter \(\nu > 0\) (introduced in (13)), problems (6) and (13) become equivalent. Then, by fixing the variable \(\varvec{F}\), problem (13) becomes
Inspired by the efficient discrete optimization algorithms in [35, 38], we develop an adaptive discrete proximal linearized optimization algorithm, which iteratively updates \(\varvec{Q}\) in the (p+1)-th iteration by \(\varvec{Q}^{p+1} = sgn(\varvec{Q}^p-\frac{1}{\eta }\nabla \mathcal L(\varvec{Q}^p))\), where \(\nabla \mathcal L(\varvec{Q})\) is the gradient of \(\mathcal L(\varvec{Q})\), \(\frac{1}{\eta }\) is the learning step size, and \(\eta \in (C,2C)\) with C the Lipschitz constant. Intuitively, because of the special \(sgn(\cdot )\) function, if the step size \(1/\eta \) is too small or too large, the solution of \(\varvec{Q}\) will get stuck in a poor local minimum or diverge. To this end, a proper \(\eta \) is adaptively determined by enlarging or reducing it according to the change of \(\mathcal L(\varvec{Q})\) between adjacent iterations, which accelerates convergence.
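A minimal sketch of this update loop is given below (our own illustration; the objective and its gradient are passed in as callables since their exact forms follow the paper's Q-step, and the enlarge/shrink rule for eta is a simple illustrative choice rather than the authors' exact schedule):

```python
import numpy as np

def update_binary_centroids(Q0, loss_fn, grad_fn, eta0=1.0, n_iter=20):
    """Adaptive discrete proximal linearized update of the binary centroids Q.

    Q0:      (K, c) initial centroids with entries in {-1, +1}.
    loss_fn: callable Q -> scalar objective L(Q).
    grad_fn: callable Q -> gradient of L at Q.
    """
    Q, eta = Q0.copy(), eta0
    prev_loss = loss_fn(Q)
    for _ in range(n_iter):
        Q_new = np.sign(Q - grad_fn(Q) / eta)
        Q_new[Q_new == 0] = 1              # keep entries strictly in {-1, +1}
        cur_loss = loss_fn(Q_new)
        if cur_loss <= prev_loss:          # accept the step, allow a larger one
            Q, prev_loss, eta = Q_new, cur_loss, eta * 0.9
        else:                              # reject the step, shrink the step size
            eta *= 2.0
    return Q
```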
(5) \(\varvec{F}\)-Step: Similarly, when fixing \(\varvec{Q}\), the problem w.r.t. \(\varvec{F}\) turns into
We can divide the above problem into N subproblems and independently optimize the cluster indicators in a column-wise fashion, i.e., one column of \(\varvec{F}\) (i.e., \(\varvec{f}_i\)) is computed at a time. Specifically, we solve the subproblems in an exhaustive search manner, similar to the conventional k-means algorithm. For the i-th column \(\varvec{f}_i\), the optimal solution of its j-th entry can be efficiently obtained by
where \(\varvec{q}_{\wp }\) is the \(\wp \)-th column (centroid) of \(\varvec{Q}\), and \(H(\cdot ,\cdot )\) denotes the Hamming distance. Since computing the Hamming distance is remarkably faster than computing the Euclidean distance, the assignment vectors \(\varvec{f}_i\) can be obtained efficiently to constitute the matrix \(\varvec{F}\).
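A minimal sketch of this column-wise assignment (our own illustration, operating on unpacked \(\pm 1\) codes for clarity; packed codes would instead use XOR and popcount):

```python
import numpy as np

def assign_clusters(B, Q):
    """Column-wise cluster assignment by the nearest binary centroid.

    B: (K, N) binary codes in {-1, +1};  Q: (K, c) binary centroids.
    Returns F: (c, N) one-hot indicator matrix.
    For +/-1 codes the Hamming distance is (K - b^T q) / 2, so the nearest
    centroid is the one maximizing the inner product b^T q.
    """
    K, N = B.shape
    c = Q.shape[1]
    nearest = np.argmax(Q.T @ B, axis=0)      # (N,) index of the closest centroid
    F = np.zeros((c, N))
    F[nearest, np.arange(N)] = 1.0
    return F
```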
(6) \(\varvec{\alpha }\)-Step: Let \(h^v = \Vert \varvec{B}-\left( \varvec{P}^v\right) ^{\top }\psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}^v \Vert _F^2 -\lambda _2 g(\varvec{P}^v)\); then problem (6) w.r.t. \(\varvec{\alpha }\) can be rewritten as
The Lagrange function of (17) is \(\mathcal L(\alpha ^v,\varvec{\zeta }) = \sum _{v=1}^{m} (\alpha ^v)^r h^v-\varvec{\zeta }(\sum _{v=1}^{m} \alpha ^v-1)\), where \(\varvec{\zeta }\) is the Lagrange multiplier. Following [47], taking the partial derivatives w.r.t. \(\alpha ^v\) and \(\varvec{\zeta }\) and setting \(\nabla _{\alpha ^v,\varvec{\zeta }} \mathcal L = \varvec{0}\), we obtain the optimal solution \(\alpha ^v = \frac{(h^v)^{\frac{1}{1-r}}}{\sum _v(h^v)^{\frac{1}{1-r}}}\).
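This weight update is a two-line computation; a NumPy sketch (our own illustration, with h the vector of per-view losses \(h^v\)):

```python
import numpy as np

def update_view_weights(h, r=5.0):
    """Closed-form update of the view weights alpha.

    h: (m,) per-view losses h^v (assumed positive); r > 1 controls smoothness.
    Returns alpha with alpha^v = (h^v)^(1/(1-r)) / sum_v (h^v)^(1/(1-r)).
    """
    w = np.asarray(h, dtype=float) ** (1.0 / (1.0 - r))
    return w / w.sum()
```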
To obtain a locally optimal solution of problem (6), we update the above six variables iteratively until convergence. To deal with the out-of-sample problem in image clustering, HSIC generates the binary code for a new query image \(\hat{\varvec{x}}\) from the v-th view (i.e., \(\hat{ \varvec{x}}^v\)) by \({\varvec{b}}^{v} = sgn\left( (\varvec{P}^v)^{\top }\psi (\hat{\varvec{x}}^v)\right) \), and then assigns it to the j-th cluster determined by \(j = \arg \min _k H({\varvec{b}}^{v},\varvec{q}_k)\) in the fast Hamming space. For multi-view clustering, the common binary code of \(\hat{\varvec{x}}\) is \(\varvec{b} =sgn \left( \sum _{v=1}^{m}(\varvec{\alpha }^v)^r (\varvec{P}^v)^{\top }\psi (\hat{\varvec{x}}^v)\right) \). Then the optimal cluster assignment of \(\hat{\varvec{x}}\) is determined by the solution of \(\varvec{F}\). The full learning procedure of HSIC is summarized in Algorithm 1.
2.2 Complexity and Memory Load Analysis
(1) The major computational burden of HSIC lies in the compressive binary representation learning and the robust discrete cluster structure learning. The computational complexities of calculating \(\varvec{P}_S\) and \(\varvec{P}_I^v\) are \(\mathcal {O}(K_SlN)\) and \(\mathcal {O}(m(K_IlN))\), respectively. Computing \(\varvec{B}\) consumes \(\mathcal {O}(KlN)\). Similar to [15], constructing the discrete cluster structures needs \(\mathcal {O}(N)\) bit-wise operations for \(\kappa \) iterations, where each distance computation requires only \(\mathcal {O}(1)\). The total computational complexity of HSIC is \(\mathcal {O}(t((K_S+mK_I+K)lN+\kappa N))\), where t and \(\kappa \) are empirically set to 10 in all the experiments. In general, the computational complexity of optimizing HSIC is linear in the number of samples, i.e., \(\mathcal {O}(N)\). (2) Regarding memory cost, HSIC has to store the mapping matrices \(\varvec{P}_s\) and \(\varvec{P}_I^v\), demanding \(\mathcal {O}(lK_S)\) and \(\mathcal {O}(lK_I)\) memory, respectively. Notably, the learned binary representation and discrete cluster centroids only need a bit-wise memory load of \(\mathcal {O}(K(N+c))\), which is much less than the \(\mathcal {O}(d(N+c))\) real-valued storage footprint required by k-means.
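To give a feel for the memory argument, the following back-of-the-envelope sketch (our own illustration; the sizes are loosely modeled on NUS-WIDE and do not reproduce the exact figures reported in Sect. 3, which also account for projection matrices and other buffers) compares real-valued and binary storage:

```python
def memory_footprint_mb(N, c, d=None, K=None):
    """Rough memory estimate (MB) for the clustered data structures.

    Real-valued k-means stores (N + c) vectors of d doubles (8 bytes each);
    binary clustering stores (N + c) codes of K packed bits.
    """
    if d is not None:
        return (N + c) * d * 8 / 2 ** 20        # doubles
    return (N + c) * K / 8 / 2 ** 20            # packed bits

# Illustration with N = 200,000 samples, c = 21 clusters, concatenated d = 634.
print(memory_footprint_mb(200_000, 21, d=634))  # ~967 MB of real-valued features
print(memory_footprint_mb(200_000, 21, K=128))  # ~3 MB of 128-bit binary codes
```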
3 Experimental Evaluation
In this section, we conduct multi-view image clustering experiments on four scalable image datasets to evaluate the effectiveness of HSIC using four frequently-used performance measures. All experiments are implemented in Matlab 2013a on a standard Windows PC with an Intel 3.4 GHz CPU.
3.1 Experimental Settings
Datasets and Features: We perform experiments on four image datasets: ILSVRC2012 1K [11], Cifar-10 [18], YouTube Faces (YTBF) [46] and NUS-WIDE [9]. Specifically, we randomly select 10 classes from ILSVRC2012 1K with 1,300 images per class, denoted as ImageNet-10, for a middle-scale multi-view clustering study. Cifar-10 contains 60,000 tiny color images in 10 classes, with 6,000 images per class. A subset of YTBF contains 182,881 face images from 89 different people (\(>1,200\) images per person). Similar to [38], we collect a subset of NUS-WIDE containing the 21 most frequent concepts, resulting in 195,834 images with at least 3,091 images per category. Because some images in NUS-WIDE are labeled with multiple concepts, we simply select the most representative label as their true category. Multiple features are extracted on all datasets. Specifically, for ImageNet-10, Cifar-10 and YTBF, we use three types of features, i.e., 1450-d LBP, 1024-d GIST, and 1152-d HOG. For NUS-WIDE, five publicly available features are employed, i.e., 64-d color histogram (CH), 225-d color moments (CM), 144-d color correlogram (CORR), 73-d edge direction histogram (EDH) and 128-d wavelet texture (WT).
Metrics and Parameters: We adopt four widely-used clustering measures [28]: clustering accuracy (ACC), normalized mutual information (NMI), purity, and F-score. In addition, both computational time and memory footprint are compared to show the efficiency of HSIC. To compare different methods fairly, we run the provided codes with default or fine-tuned parameter settings according to the original papers. For binary clustering methods, a 128-bit code length is used on all datasets. For the hyper-parameters \(\lambda _1\), \(\frac{\lambda _2}{N}\), and \(\lambda _3\) of HSIC, we first employ a grid search on ImageNet-10 to find the best values (i.e., \(10^{-3}\), \(10^{-3}\), and \(10^{-5}\), respectively), which are then directly adopted on the other datasets for simplicity. We empirically set r and \(\delta = \frac{K_S}{K}\) (i.e., the ratio of shared binary codes) to 5 and 0.2 respectively in all experiments. Multi-view clustering results are denoted as ‘MulView’. We report the average clustering results over 10 random initializations for each method.
We conduct the following experiments from three perspectives. Firstly, we verify various characteristics of HSIC on the middle-scale dataset, i.e., ImageNet-10, where we compare HSIC with both SVIC and MVIC methods (including real-valued and binary ones). Secondly, three large-scale datasets are exploited to evaluate HSIC on the challenging large-scale MVIC problem. Remark: Based on the results on ImageNet-10 (see Table 2), the real-valued MVIC methods only obtain results comparable to k-means, but they are very time-consuming. Moreover, when applying those MVIC methods (e.g., AMGL and MLAN) to larger datasets, we encounter ‘out-of-memory’ errors. Therefore, the real-valued MVIC methods are not compared on the three large-scale datasets. Thirdly, some empirical analyses of HSIC are also provided.
3.2 Experiments on the Middle-Scale ImageNet-10
We compare HSIC with several state-of-the-art clustering methods, including SVIC methods (i.e., k-means [16], k-Medoids [33], Approximate kernel k-means [8], Nyström [6], NMF [20], LSC-K [7]), MVIC methods (i.e., AMGL [31], MVKM [4], MLAN [30], MultiNMF [22], OMVC [37], MVSC [21]), and two existing binary clustering methods (i.e., ITQ+bk-means [15] and CKM [41]). Additionally, two variants of HSIC are also compared to show its efficacy, i.e., HSIC with an F-norm regularized binary clustering term (HSIC-F), and HSIC with two separate steps of binary code learning and discrete clustering (HSIC-TS). Similar to [21, 22], for all the SVIC methods, we simply concatenate the feature vectors of all views for ‘MulView’ clustering.

Table 1 reports the performance of all clustering methods. From Table 1, we observe that in most cases HSIC achieves comparable SVIC results but superior MVIC results in comparison with all the real-valued and binary clustering methods. This indicates the effectiveness of HSIC in common representation learning and robust cluster structure learning, especially for the MVIC cases. Furthermore, HSIC is clearly superior to HSIC-F and HSIC-TS, which demonstrates the robustness and effectiveness of the joint learning framework.
The computational costs are reported in Table 2. From its last three columns, we can see that the binary clustering methods reduce the computational time compared with real-valued ones such as k-means and LSC-K, owing to the highly efficient distance calculation in the Hamming space. In particular, HSIC is much faster than the compared real-valued and binary clustering methods, which also confirms the efficiency of the developed optimization algorithm. Specifically, for MVIC, HSIC achieves a clear speed-up of 40.20 times over k-means. For memory footprint, k-means and HSIC require 361 MB and 2.73 MB respectively, i.e., HSIC reduces memory consumption by a factor of about 132.

Why does HSIC Outperform the Real-Valued Methods? Table 1 clearly shows that HSIC achieves competitive or superior clustering performance compared to the real-valued clustering methods. The favorable performance mainly comes from: (1) HSIC greatly benefits from the proposed effective discrete optimization algorithm, such that the learned binary representations can eliminate some redundant and noisy information in the original real-valued features. As can be seen in Fig. 2, the similarity structures of the same clusters are enhanced in the coding space; meanwhile, some disturbances from the original features are excluded, refining the learned representation. (2) For image clustering, binary features are more robust to local changes, since small variations caused by varying environments can be eliminated by the quantized binary codes. (3) HSIC is a unified interactive learning framework for the optimal binary codes and cluster structures, which is shown to be better than the disjoint learning approaches (e.g., LSC-K, NMF, MVSC, AMGL and MLAN).
3.3 Experiments on Large-Scale Datasets
To show the strong scalability of HSIC on the large-scale MVIC problem, we compare HSIC with several state-of-the-art scalable clustering methods on three large-scale multi-view datasets. The clustering performance is summarized in Table 3. Given these results, we have the following observations: (1) Generally, MVIC performs better than SVIC, which implies the necessity of incorporating the complementary traits of multiple features for image clustering. In particular, HSIC achieves competitive or better SVIC results and consistently the best MVIC performance. This mainly owes to the adaptive weight learning strategy and the exploitation of sharable and individual information from heterogeneous features. (2) From the last three columns of Table 3, we observe that HSIC and its variants tend to be better than the real-valued methods. This shows that the binary codes learned by HSIC are competitive with the real-valued features. (3) Compared to HSIC-TS and HSIC-F, HSIC achieves superior performance in most cases. This further reflects the advantages of the unified learning strategy and the robust binary cluster structure construction.
The comparisons of running time and memory footprint are illustrated in Tables 4 and 5, respectively. From Table 4, we can observe that our HSIC is the fastest method in most cases. Table 5 shows that HSIC significantly reduces the memory load for large-scale MVIC compared to k-means. The memory cost of HSIC is similar to other binary clustering methods but clearly less than the real-valued methods. Moreover, as shown in Tables 4 and 5, for MVIC on NUS-WIDE with 5 views, HSIC can cluster near one million (\(195,834\times 5\)) features in 81 seconds using only 5.52 MB memory, while k-means needs about 29 minutes with 961 MB memory. Thus, HSIC can effectively address large-scale MVIC with much less computational time and memory footprint.
3.4 Empirical Analysis
Component Analysis: We evaluate the effectiveness of different components of HSIC in Fig. 3. Specifically, in addition to ‘HSIC-TS’ and ‘HSIC-F’, we construct ‘HSIC-U’ by removing the balanced and independence constraints on the binary codes and clustering centroids. HSIC-‘view’ and ITQ-‘view’ respectively refer to the SVIC results obtained by HSIC and ITQ+bk-means on the ‘view’-specific features. From Fig. 3, we observe that each component contributes substantially to the final performance, and removing any component degrades the results.
Effect of Code Length: We show how the performance changes with increasing code length in Fig. 3. In general, longer codes may provide more information and thus higher clustering performance. Specifically, both ITQ-based and HSIC-based methods tend to achieve improved performance with an increasing number of bits. Moreover, the HSIC-based methods are superior to the baseline k-means when the code length exceeds 32 bits. HSIC achieves the best clustering results at every code length, because it can effectively coordinate the importance of different views and mine the semantic correlations between them.
Effect of Number of Clusters: All the above experiments are evaluated with the ground-truth numbers of clusters. However, if the number of clusters is unknown, how will the performance change with different cluster numbers? To this end, we perform experiments on Cifar-10 to evaluate the stability of different methods w.r.t. the number of clusters. Figure 4 illustrates the performance changes when varying the cluster number from 5 to 40 with an interval of 5. Interestingly, the performance (i.e., ACC, NMI and F-score) of the HSIC-based methods increases when the cluster number grows from 5 to 10, but then drops sharply when more than 10 clusters are used. This suggests that 10 is the optimal number of clusters. Notably, ‘purity’ cannot reliably trade off clustering quality against the number of clusters [28], since it tends to increase as the number of clusters grows. Importantly, the clustering performance of HSIC is in most cases better than that of all the compared methods, and the HSIC-based methods yield the three best results. This shows that HSIC is robust to different cluster numbers and can potentially be used to predict the ‘optimal’ number of clusters.
4 Conclusion
In this paper, we proposed a highly-economized multi-view clustering framework, dubbed HSIC, to jointly learn the compressive binary representations and robust discrete cluster structures. Specifically, HSIC collaboratively integrated the heterogeneous features into the common binary codes, where the sharable and individual information of multiple views were exploited. Meanwhile, a robust cluster structure learning model was developed to improve the clustering performance. Moreover, an effective alternating optimization algorithm was introduced to guarantee the high-quality discrete solutions. Extensive experiments on large-scale multi-view datasets demonstrate the superiority of HSIC over the state-of-the-art methods in terms of clustering performance with significantly reduced computational time and memory footprint.
Notes
- 1. Although ‘multi-view’ can refer to multiple features, domains or modalities, in this paper we focus solely on the clustering problem for images with multiple features (e.g., LBP, HOG and GIST).
References
Avrithis, Y., Kalantidis, Y., Anagnostopoulos, E., Emiris, I.Z.: Web-scale image clustering revisited. In: ICCV (2015)
Baluja, S., Covell, M.: Learning to hash: forgiving hash functions and applications. Data Mining Knowl. Discov. 17(3), 402–430 (2008)
Bickel, S., Scheffer, T.: Multi-view clustering. In: ICDM (2004)
Cai, X., Nie, F., Huang, H.: Multi-view k-means clustering on big data. In: IJCAI (2013)
Chen, J., Wang, Y., Qin, J., Liu, L., Shao, L.: Fast person re-identification via cross-camera semantic binary transformation. In: CVPR (2017)
Chen, W.Y., Song, Y., Bai, H., Lin, C.J., Chang, E.Y.: Parallel spectral clustering in distributed systems. IEEE TPAMI 33(3), 568–586 (2011)
Chen, X., Cai, D.: Large scale spectral clustering with landmark-based representation. In: AAAI (2011)
Chitta, R., Jin, R., Havens, T.C., Jain, A.K.: Approximate kernel k-means: solution to large scale kernel clustering. In: SIGKDD (2011)
Chua, T.S., Tang, J., Hong, R., Li, H., Luo, Z., Zheng, Y.: NUS-WIDE: a real-world web image database from national university of Singapore. In: ACM International Conference on Image and Video Retrieval (2009)
De Sa, V.R., Gallagher, P.W., Lewis, J.M., Malave, V.L.: Multi-view kernel construction. Mach. Learn. 79(1–2), 47–71 (2010)
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Li, F.F.: ImageNet: a large-scale hierarchical image database. In: CVPR (2009)
Ding, C., Zhou, D., He, X., Zha, H.: R1-PCA: rotational invariant \(\ell _1\)-norm principal component analysis for robust subspace factorization. In: ICML (2006)
Gao, H., Nie, F., Li, X., Huang, H.: Multi-view subspace clustering. In: ICCV (2015)
Gong, Y., Lazebnik, S., Gordo, A., Perronnin, F.: Iterative quantization: a procrustean approach to learning binary codes for large-scale image retrieval. IEEE TPAMI 35(12), 2916–2929 (2013)
Gong, Y., Pawlowski, M., Yang, F., Brandy, L., Bourdev, L., Fergus, R.: Web scale photo hash clustering on a single machine. In: CVPR (2015)
Hartigan, J.A., Wong, M.A.: Algorithm as 136: a k-means clustering algorithm. J. R. Stat. Soc. Ser. C 28(1), 100–108 (1979)
Jain, A.K.: Data clustering: 50 years beyond k-means. PRL 31(8), 651–666 (2010)
Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Technical report (2009)
Kumar, A., Rai, P., Daume, H.: Co-regularized multi-view spectral clustering. In: NIPS (2011)
Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: NIPS (2001)
Li, Y., Nie, F., Huang, H., Huang, J.: Large-scale multi-view spectral clustering via bipartite graph. In: AAAI (2015)
Liu, J., Wang, C., Gao, J., Han, J.: Multi-view clustering via joint nonnegative matrix factorization. In: ICDM (2013)
Liu, L., Shao, L.: Sequential compact code learning for unsupervised image hashing. IEEE TNNLS 27(12), 2526–2536 (2016)
Liu, L., Yu, M., Shao, L.: Latent structure preserving hashing. IJCV 122(3), 439–457 (2017)
Liu, W., Mu, C., Kumar, S., Chang, S.F.: Discrete graph hashing. In: NIPS (2014)
Liu, W., Wang, J., Kumar, S., Chang, S.F.: Hashing with graphs. In: ICML (2011)
Lu, J., Liong, V.E., Zhou, J.: Simultaneous local binary feature learning and encoding for homogeneous and heterogeneous face recognition. IEEE TPAMI 40(8), 1979–1993 (2017)
Manning, C.D., Raghavan, P., Schütze, H., et al.: Introduction to Information Retrieval, vol. 1. Cambridge University Press, Cambridge (2008)
Ng, A.Y., Jordan, M.I., Weiss, Y.: On spectral clustering: analysis and an algorithm. In: NIPS (2002)
Nie, F., Cai, G., Li, X.: Multi-view clustering and semi-supervised classification with adaptive neighbours. In: AAAI (2017)
Nie, F., Li, J., Li, X., et al.: Parameter-free auto-weighted multiple graph learning: A framework for multiview clustering and semi-supervised classification. In: IJCAI (2016)
Otto, C., Wang, D., Jain, A.K.: Clustering millions of faces by identity. IEEE TPAMI 40(2), 289–303 (2018)
Park, H.S., Jun, C.H.: A simple and fast algorithm for k-medoids clustering. Expert Syst. Appl. 36(2), 3336–3341 (2009)
Qin, J., et al.: Binary coding for partial action analysis with limited observation ratios. In: CVPR (2017)
Qin, J., et al.: Zero-shot action recognition with error-correcting output codes. In: CVPR (2017)
Sculley, D.: Web-scale k-means clustering. In: WWW (2010)
Shao, W., He, L., Lu, C.T., Philip, S.Y.: Online multi-view clustering with incomplete views. In: ICBD (2016)
Shen, F., Zhou, X., Yang, Y., Song, J., Shen, H.T., Tao, D.: A fast optimization method for general binary code learning. IEEE TIP 25(12), 5610–5621 (2016)
Shen, F., et al.: Classification by retrieval: binarizing data and classifier. In: ACM SIGIR (2017)
Shen, F., Shen, C., Liu, W., Tao Shen, H.: Supervised discrete hashing. In: CVPR (2015)
Shen, X.B., Liu, W., Tsang, I.W., Shen, F., Sun, Q.S.: Compressed k-means for large-scale clustering. In: AAAI (2017)
Tzortzis, G., Likas, A.: Kernel-based weighted multi-view clustering. In: ICDM (2012)
Wang, J., Zhang, T., Sebe, N., Shen, H.T., et al.: A survey on learning to hash. IEEE TPAMI 40(4), 769–790 (2017)
Wang, J., Kumar, S., Chang, S.F.: Semi-supervised hashing for scalable image retrieval. In: CVPR (2010)
Wang, X., Guo, X., Lei, Z., Zhang, C., Li, S.Z.: Exclusivity-consistency regularized multi-view subspace clustering. In: CVPR (2017)
Wolf, L., Hassner, T., Maoz, I.: Face recognition in unconstrained videos with matched background similarity. In: CVPR (2011)
Xia, T., Tao, D., Mei, T., Zhang, Y.: Multiview spectral embedding. IEEE TCYB 40(6), 1438–1446 (2010)
Xu, C., Tao, D., Xu, C.: A survey on multi-view learning. arXiv preprint (2013)
Zhang, C., Hu, Q., Fu, H., Zhu, P., Cao, X.: Latent multi-view subspace clustering. In: CVPR (2017)
Zhang, Z., Liu, L., Shen, F., Shen, H.T., Shao, L.: Binary multi-view clustering. IEEE TPAMI (2018). https://doi.org/10.1109/TPAMI.2018.2847335
Zhang, Z., Shao, L., Xu, Y., Liu, L., Yang, J.: Marginal representation learning with graph structure self-adaptation. IEEE TNNLS 29(10), 4645–4659 (2018)